Welcome to servicecomb-service-center’s documentation!

Introduction

What is ServiceComb Service Center

Apache ServiceComb Service-Center is a RESTful-based service registry that provides micro-service discovery and micro-service management. It is based on the Open API format and provides features such as service discovery, fault tolerance, dynamic routing, notification subscription, and scalability by design. It has a high-performance cache design and separate entity management for micro-services and their instances. It provides out-of-the-box support for metrics and tracing, and it has a web portal to manage the micro-services.

Why use ServiceComb Service Center

Service Center is a service registry. Like other service registries, its main role is to solve the problem of service registration and discovery, that is, dynamic routing. To better support cross-team collaboration, it also adds support for service contracts (based on the OpenAPI specification). When used together with contract tools (Toolkit) or the Java microservice development kit (Java Chassis), communication interfaces become transparent, allowing users to focus on business development.

Service Center Commands

scctl

scctl enables users to view the list of micro-services registered in Service Center (version 1.1.0+). You can view all the commands from here

QuickStart Guide
Install

The easiest way to get started with scctl is to download the release from here and then untar/unzip it based on your OS.

Check the version

Windows(apache-servicecomb-service-center-XXX-windows-amd64.zip):

scctl.exe version

Linux(apache-servicecomb-service-center-XXXX-linux-amd64.tar.gz):

./scctl version

Note: If you have already bootstrapped SC and it is listening on 127.0.0.1:30100, this command will also print the SC version.

Running scctl from source code

Requirements

  • Go version 1.8+ is required to build the latest version of scctl.

However, if you want to try our latest code, you can follow the steps below:

#Make sure your GOPATH is set correctly and download all the vendors of SC
git clone https://github.com/apache/servicecomb-service-center.git $GOPATH/src/github.com/apache/servicecomb-service-center
cd $GOPATH/src/github.com/apache/servicecomb-service-center

cd scctl

go build

Windows:

scctl.exe version

Linux:

./scctl version

Get started

Quick Start

Getting Service Center

The easiest way to get Service Center is to use one of the pre-built release binaries which are available for Linux, Windows and Docker.

Running Service Center using the Release

You can download our latest release from the ServiceComb Website. When you get the release, you can execute the start script to run Service Center.

Windows(apache-servicecomb-service-center-XXX-windows-amd64.zip):

start-service-center.bat

Linux(apache-servicecomb-service-center-XXXX-linux-amd64.tar.gz):

./start-service-center.sh

Docker:

docker pull servicecomb/service-center
docker run -d -p 30100:30100 servicecomb/service-center

Note: The releases of Service-Center use embedded etcd. If you want to use a separate instance of etcd, you can deploy etcd separately and configure the etcd IP as shown here.

vi conf/app.conf

## Edit this file
# registry address
# 1. if registry_plugin equals to 'embedded_etcd'
# manager_name = "sc-0"
# manager_addr = "http://127.0.0.1:2380"
# manager_cluster = "sc-0=http://127.0.0.1:2380"
# 2. if registry_plugin equals to 'etcd'
# manager_cluster = "127.0.0.1:2379"
manager_cluster = "127.0.0.1:2379"

By default SC comes up on 127.0.0.1:30100; however, you can change this address in the configuration shown below.

vi conf/app.conf

httpaddr = 127.0.0.1
httpport = 30100

Building & Running Service-Center from source

Requirements

  • Go version 1.8+ is required to build the latest version of Service-Center.

Download the Code

git clone https://github.com/apache/servicecomb-service-center.git $GOPATH/src/github.com/apache/servicecomb-service-center
cd $GOPATH/src/github.com/apache/servicecomb-service-center

Dependencies

You can download dependencies directly using the go mod command. Please follow the steps below to download all the dependencies.

# greater than go1.11
GO111MODULE=on go mod download
GO111MODULE=on go mod vendor

Build the Service-Center

go build -o service-center github.com/apache/servicecomb-service-center/cmd/scserver

First, you need to run an etcd (version: 3.x) as the database service and then modify the etcd IP and port in the Service Center configuration file (./etc/conf/app.conf : manager_cluster).

wget https://github.com/coreos/etcd/releases/download/v3.1.8/etcd-v3.1.8-linux-amd64.tar.gz
tar -xvf etcd-v3.1.8-linux-amd64.tar.gz
cd etcd-v3.1.8-linux-amd64
./etcd

cd $GOPATH/src/github.com/apache/servicecomb-service-center
cp -r ./etc/conf .
./service-center

This will bring up Service Center listening on ip/port 127.0.0.1:30100 for service communication. If you want to change the listening ip/port, you can modify it in the Service Center configuration file (./conf/app.conf : httpaddr, httpport).
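
To quickly verify that the server is up, you can query the health API (a hedged check assuming the default listen address; the same endpoint is used in the cluster guide later in this document):

curl http://127.0.0.1:30100/v4/default/registry/health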

Running Frontend using the Release

You can download our latest release from the ServiceComb Website, untar it, and run start-frontend.sh/start-frontend.bat. This will bring up the Service-Center UI on http://127.0.0.1:30103.

Windows(apache-servicecomb-service-center-XXX-windows-amd64.zip):

start-frontend.bat

Linux(apache-servicecomb-service-center-XXXX-linux-amd64.tar.gz):

./start-frontend.sh

Note: By default the frontend runs on 127.0.0.1; if you want to change this, you can edit it in conf/app.conf.

frontend_host_ip=127.0.0.1
frontend_host_port=30103

You can follow the guide over here to run the Frontend from source.

User Guides

Deploying Service-Center

Deploying Service-Center in Cluster Mode

As Service-Center is a stateless application, it can be seamlessly deployed in cluster mode to achieve HA. SC depends on etcd to store the micro-service information, so you can opt for running etcd standalone or in cluster mode. Once you are done installing etcd, either in cluster or standalone mode, you can follow the steps below to run Service-Center.

Let's assume you want to install 2 instances of Service-Center on VMs with the following details:

| Name | Address |
| --- | --- |
| VM1 | 10.12.0.1 |
| VM2 | 10.12.0.2 |

Here we assume your etcd is running on http://10.12.0.4:2379 (you can follow this guide to install etcd in cluster mode.)

Step 1

Download the SC release from here on all the VMs.

# Untar the release
# tar -xvf service-center-X.X.X-linux-amd64.tar.gz

Note: Please don’t run start.sh as it will also start the etcd.

Step 2

Edit the configuration of the ip/port on which SC will run and the etcd ip.

VM1

# vi conf/app.conf
#Replace the below values
httpaddr = 10.12.0.1
manager_cluster = "10.12.0.4:2379"

# Start the Service-center
./service-center

VM2

# vi conf/app.conf
#Replace the below values
httpaddr = 10.12.0.2
manager_cluster = "10.12.0.4:2379"

# Start the Service-center
./service-center

Note: In manager_cluster you can put multiple etcd instances of the cluster, like

manager_cluster= "10.12.0.4:2379,10.12.0.X:2379,10.12.0.X:2379"
Step 3

Verify your instances

# curl http://10.12.0.1:30101/v4/default/registry/health
{
    "instances": [
        {
            "instanceId": "d6e9e976f9df11e7a72b286ed488ff9f",
            "serviceId": "d6e99f4cf9df11e7a72b286ed488ff9f",
            "endpoints": [
                "rest://10.12.0.1:30100"
            ],
            "hostName": "service_center_10_12_0_1",
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1516012543",
            "modTimestamp": "1516012543"
        },
        {
            "instanceId": "16d4cb35f9e011e7a58a286ed488ff9f",
            "serviceId": "d6e99f4cf9df11e7a72b286ed488ff9f",
            "endpoints": [
                "rest://10.12.0.2:30100"
            ],
            "hostName": "service_center_10_12_0_2",
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1516012650",
            "modTimestamp": "1516012650"
        }
    ]
}

As we can see here, Service-Center can auto-discover all the instances of Service-Center running in the cluster. This auto-discovery feature is used by the Java-Chassis SDK to discover all the instances of Service-Center by knowing at least one IP of a Service-Center running in the cluster.

In your microservice.yaml you can provide the SC IPs of both instances or of any one instance; the SDK can auto-discover the other instances and use them to get micro-service details in case the first one fails.

cse:
  service:
    registry:
      address: "http://10.12.0.1:30100,http://10.12.0.2:30100"
      autodiscovery: true

In this case the SDK will be able to discover all the instances of SC in the cluster.

Setup SSL/TLS

Requirement

Service Center (SC) takes several files for its SSL/TLS options.

  1. Environment variable 'SSL_ROOT': the directory that contains the certificates. If not set, 'etc/ssl' under the SC work directory is used.
  2. $SSL_ROOT/trust.cer: trusted certificate authority.
  3. $SSL_ROOT/server.cer: certificate used for SSL/TLS connections to SC.
  4. $SSL_ROOT/server_key.pem: key for the certificate. If the key is encrypted, 'cert_pwd' must be set.
  5. $SSL_ROOT/cert_pwd (optional): the password used to decrypt the private key.
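
For a local test setup, self-signed certificates matching the expected file names can be generated with openssl, for example (illustrative only; production deployments should use certificates issued by your own CA):

# create a self-signed CA (trust.cer) and its key
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca_key.pem -out trust.cer -days 365 -subj "/CN=test-ca"
# create the server key and a certificate signing request
openssl req -newkey rsa:4096 -nodes -keyout server_key.pem -out server.csr -subj "/CN=service-center"
# sign the server certificate (server.cer) with the CA
openssl x509 -req -in server.csr -CA trust.cer -CAkey ca_key.pem -CAcreateserial -out server.cer -days 365
# copy trust.cer, server.cer and server_key.pem into $SSL_ROOT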

Configuration

Please modify conf/app.conf before starting up SC.

  1. ssl_mode: Enable SSL/TLS mode. [0, 1]
  2. ssl_verify_client: Whether SC verifies the client (including the etcd server). [0, 1]
  3. ssl_min_version: Minimal SSL/TLS protocol version. ["TLSv1.0", "TLSv1.1", "TLSv1.2", "TLSv1.3"], depending on the Go version
  4. ssl_ciphers: A list of cipher suites. By default: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256
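
Putting these options together, a minimal conf/app.conf fragment enabling TLS might look like the sketch below (the values are illustrative; ssl_ciphers is a comma-separated list):

ssl_mode = 1
ssl_verify_client = 1
ssl_min_version = "TLSv1.2"
ssl_ciphers = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"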

Data Source

Service-Center supports multiple DB configurations. Configure app.yaml according to your needs.

registry:
  # buildin, etcd, embedded_etcd, mongo
  kind: etcd
  # registry cache, if this option value set 0, service center can run
  # in lower memory but no longer push the events to client.
  cache:
    mode: 1
    # the cache will be clear after X, if not set cache will be never clear
    ttl:
  # enabled if registry.kind equal to etcd or embedded_etcd

| field | description | required | value |
| --- | --- | --- | --- |
| registry.kind | database type | yes | etcd / embedded_etcd / mongo |
| registry.cache.mode | enable cache (1 is on, 0 is off) | yes | 1 / 0 |
| registry.cache.ttl | cache timeout (if not set, the cache is never cleared) | no | an integer time, like 30s/20m/10h |

Etcd

Download the etcd according to your own environment. Etcd Installation package address.

Configure app.yaml according to your needs.

etcd:
  # the interval of etcd health check, aggregation conflict check and sync loop
  autoSyncInterval: 30s
  compact:
    # indicate how many revision you want to keep in etcd
    indexDelta: 100
    interval: 12h
  cluster:
    # if registry_plugin equals to 'embedded_etcd', then
    # name: sc-0
    # managerEndpoints: http://127.0.0.1:2380"
    # endpoints: sc-0=http://127.0.0.1:2380
    # if registry_plugin equals to 'etcd', then
    # endpoints: 127.0.0.1:2379
    endpoints: 127.0.0.1:2379
  # the timeout for failing to establish a connection
  connect:
    timeout: 10s
  # the timeout for failing to read response of registry
  request:
    timeout: 30s

| field | description | required | value |
| --- | --- | --- | --- |
| registry.etcd.autoSyncInterval | synchronization interval | yes | an integer time, like 30s/20m/10h |
| registry.etcd.compact.indexDelta | versions retained in etcd | yes | a 64-bit integer, like 100 |
| registry.etcd.compact.interval | compaction interval | yes | an integer time, like 30s/20m/10h |
| registry.etcd.cluster.endpoints | endpoints address | yes | string, like 127.0.0.1:2379 |
| registry.etcd.connect.timeout | the timeout for establishing a connection | yes | an integer time, like 30s/20m/10h |
| registry.etcd.request.timeout | request timeout | yes | an integer time, like 30s/20m/10h |

Download the installation package according to the environment information

  1. Download etcd package.
  2. Unzip, modify the configuration and start etcd.
  3. Download the latest release from ServiceComb Website.
  4. Decompress, modify /conf/app.yaml.
  5. Execute the start script to run service center

Mongodb

Download the mongodb according to your own environment. Mongodb Installation package address.

Configure app.yaml according to your needs.

mongo:
  cluster:
    uri: mongodb://localhost:27017
    sslEnabled: false
    rootCAFile: /opt/ssl/ca.pem
    verifyPeer: false
    certFile: /opt/ssl/client.crt
    keyFile: /opt/ssl/client.key

| field | description | required | value |
| --- | --- | --- | --- |
| registry.mongo.cluster.uri | mongodb server address | yes | string, like mongodb://localhost:27017 |
| registry.mongo.cluster.sslEnabled | ssl enabled / not enabled | yes | false / true |
| registry.mongo.cluster.rootCAFile | if sslEnabled is true, the CA file path must be set | yes | string, like /opt/ssl/ca.pem |
| registry.mongo.cluster.verifyPeer | insecure skip verify | yes | false / true |
| registry.mongo.cluster.certFile | the cert file path, set according to the mongodb server configuration | no | string, like /opt/ssl/client.crt |
| registry.mongo.cluster.keyFile | the key file path, set according to the mongodb server configuration | no | string, like /opt/ssl/client.key |

Download the installation package according to the environment information

  1. Download mongodb package.
  2. Unzip, modify the configuration and start mongodb. Mongodb configure ssl.
  3. Download the latest release from ServiceComb Website.
  4. Decompress, modify /conf/app.yaml.
  5. Execute the start script to run service center
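
For example, to switch the data source to MongoDB you can combine the registry and mongo snippets above in conf/app.yaml (a sketch; adjust the URI and TLS settings to your deployment):

registry:
  # use MongoDB as the backend
  kind: mongo
  cache:
    mode: 1
mongo:
  cluster:
    uri: mongodb://localhost:27017
    sslEnabled: false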

Quota management

Resources

  • service: microservice version quotas.
  • instance: instance quotas.
  • schema: schema quotas for each microservice.
  • tag: tag quotas for each microservice.
  • account: account quotas.
  • role: role quotas.

How to configure

1. Use configuration file

edit conf/app.yaml

quota:
  kind: buildin
  cap:
    service:
      limit: 50000
    instance:
      limit: 150000
    schema:
      limit: 100
    tag:
      limit: 100
    account:
      limit: 1000
    role:
      limit: 100
2. Use environment variable
  • QUOTA_SERVICE: the same as the config key quota.cap.service.limit
  • QUOTA_INSTANCE: the same as the config key quota.cap.instance.limit
  • QUOTA_SCHEMA: the same as the config key quota.cap.schema.limit
  • QUOTA_TAG: the same as the config key quota.cap.tag.limit
  • QUOTA_ACCOUNT: the same as the config key quota.cap.account.limit
  • QUOTA_ROLE: the same as the config key quota.cap.role.limit
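
For example, to raise the service and instance quotas via environment variables before starting the server (the values are illustrative):

export QUOTA_SERVICE=100000
export QUOTA_INSTANCE=300000
./service-center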

Limits

Exceeding the limits may cause internal errors or performance degradation.

Http Server

  • Request head size: 3KB
  • Request body size: 2048KB

Microservice

  • Metadata size: 5KB
  • Schema content size: 2048KB
  • Properties size: 3KB

Instance

  • Metadata size: 5KB
  • Properties size: 3KB

Metrics


How to export the metrics

Service-Center is compatible with the Prometheus standard. By default, the full metrics can be collected by accessing the /metrics API through port 30100.

If you want to customize the metrics configuration:

metrics:
  enable: true # enable to start metrics gather
  interval: 30s # the duration of collection
  exporter: prometheus # use the prometheus exporter
  prometheus:
    # optional, listen another ip-port and path if set, e.g. http://127.0.0.1:80/other
    listenURL:
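
With the default settings, the metrics endpoint can be scraped directly, for example:

curl http://127.0.0.1:30100/metrics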

Summary

FamilyName: service_center

Server

| metric | type | description |
|:---|:---:|:---|
| http_request_total | counter | The total number of received service requests. |
| http_success_total | counter | Total number of requests responding to status code 2xx or 3xx. |
| http_request_durations_microseconds | summary | The latency of http requests. |
| http_query_per_seconds | gauge | TPS of http requests. |

Pub/Sub

| metric | type | description |
|:---|:---:|:---|
| notify_publish_total | counter | The total number of instance events. |
| notify_publish_durations_microseconds | summary | The latency between the event generated in ServiceCenter and received by the client. |
| notify_pending_total | counter | The total number of pending instance events. |
| notify_pending_durations_microseconds | summary | The latency of pending instance events. |
| notify_subscriber_total | counter | The total number of subscribers, e.g. Websocket, gRPC. |

Meta

| metric | type | description |
|:---|:---:|:---|
| db_heartbeat_total | counter | The total number of received instance heartbeats. |
| db_heartbeat_durations_microseconds | summary | The latency of received instance heartbeats. |
| db_domain_total | counter | The total number of domains. |
| db_service_total | counter | The total number of micro-services. |
| db_service_usage | gauge | The usage percentage of the service quota. |
| db_instance_total | counter | The total number of instances. |
| db_instance_usage | gauge | The usage percentage of the instance quota. |
| db_schema_total | counter | The total number of schemas. |
| db_framework_total | counter | The total number of SDK frameworks. |

Backend

| metric | type | description |
|:---|:---:|:---|
| db_backend_event_total | counter | The total number of received backend events, e.g. etcd, Mongo. |
| db_backend_event_durations_microseconds | summary | The latency between receiving backend events and finishing building the cache. |
| db_dispatch_event_total | counter | The total number of events dispatched to resource handlers. |
| db_dispatch_event_durations_microseconds | summary | The latency between receiving backend events and finishing the dispatch. |

System

| metric | type | description |
|:---|:---:|:---|
| db_sc_total | counter | The total number of ServiceCenter instances. |
| process_resident_memory_bytes | | |
| process_cpu_seconds_total | | |
| process_cpu_usage | | |
| go_threads | | |
| go_goroutines | | |
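
As a hedged example, once Prometheus scrapes these metrics you could chart the request rate and success ratio with queries like the following (assuming the FamilyName above is used as the metric name prefix):

# requests per second over the last 5 minutes
rate(service_center_http_request_total[5m])

# share of requests answered with 2xx/3xx
rate(service_center_http_success_total[5m]) / rate(service_center_http_request_total[5m])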

Tracing

Report trace data

Edit the configuration of the tracing plugin
trace_plugin='buildin' # or empty
To zipkin server
_images/tracing-server.PNG
Add the zipkin server endpoint
# Export the environments
export TRACING_COLLECTOR=server
export TRACING_SERVER_ADDRESS=http://127.0.0.1:9411 # zipkin server endpoint

# Start the Service-center
./servicecenter
To file
_images/tracing-file.PNG
Customize the path of trace data file
# Export the environments
export TRACING_COLLECTOR=file
export TRACING_FILE_PATH=/tmp/servicecenter.trace # if not set, use ${work directory}/SERVICECENTER.trace

# Start the Service-center
./servicecenter

Heartbeat

Heartbeat configuration. Configure app.yaml according to your needs.

heartbeat:
  # configuration of websocket long connection
  websocket:
    pingInterval: 30s
  # heartbeat.kind="checker or cache"
  # if heartbeat.kind equals to 'cache', should set cacheCapacity,workerNum and taskTimeout
  # capacity = 10000
  # workerNum = 10
  # timeout = 10
  kind: cache
  cacheCapacity: 10000
  workerNum: 10
  timeout: 10

| field | description | required | value |
| --- | --- | --- | --- |
| heartbeat.websocket.pingInterval | websocket ping interval | yes | like 30s |
| heartbeat.kind | there are two types of heartbeat plug-ins: with cache and without cache | yes | cache / checker |
| heartbeat.cacheCapacity | cache capacity | yes | an integer, like 10000 |
| heartbeat.workerNum | the number of workers | yes | an integer, like 10 |
| heartbeat.timeout | processing task timeout (default unit: s) | yes | an integer, like 10 |

RBAC

You can choose to enable the RBAC feature. After RBAC is enabled, all requests to Service Center must be authenticated.

Configuration file

Follow these steps to enable this feature.

1. Get RSA key pairs

openssl genrsa -out private.key 4096
openssl rsa -in private.key -pubout -out public.key

2. Edit app.yaml

rbac:
  enable: true
  privateKeyFile: ./private.key # rsa key pairs
  publicKeyFile: ./public.key # rsa key pairs
auth:
  kind: buildin # must set to buildin

3. Set the root account

Before you start the server, you need to set an environment variable with your root account password. Please note that the password must conform to the following rules: at least 8 characters, at most 32 characters, at least one upper-case letter, at least one lower-case letter, at least one digit, and at least one special character.

export SC_INIT_ROOT_PASSWORD='P4$$word'

The first time the Service Center cluster initializes, it uses this password to set up the RBAC module. You can change the password through the REST API after the cluster has started, but you can not use SC_INIT_ROOT_PASSWORD to change it once the cluster has started.

The initial account name is fixed as "root".

To securely distribute your root account and private key, you can use a Kubernetes secret.

Generate a token

A token is the only credential for accessing the REST APIs; before you access any API, you need to get a token from Service Center.

curl -X POST \
  http://127.0.0.1:30100/v4/token \
  -d '{"name":"root",
"password":"P4$$word"}'

This will return a token; the token expires after 30 minutes.

{"token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTI4MzIxODUsInVzZXIiOiJyb290In0.G65mgb4eQ9hmCAuftVeVogN9lT_jNg7iIOF_EAyAhBU"}

Authentication

In each request you must add the token to the HTTP header:

Authorization: Bearer {token}

for example:

curl -X GET \
  'http://127.0.0.1:30100/v4/default/registry/microservices/{service-id}/instances' \
  -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTI4OTQ1NTEsInVzZXIiOiJyb290In0.FfLOSvVmHT9qCZSe_6iPf4gNjbXLwCrkXxKHsdJoQ8w' 

Change password

You must supply the current password and a token to update to a new password.

curl -X POST \
  http://127.0.0.1:30100/v4/account/root/password \
  -H 'Authorization: Bearer {your_token}' \
  -d '{
	"currentPassword":"P4$$word",
	"password":"P4$$word1"
}'

Create a new account

You can create a new account named "peter" whose role is developer. For how to add roles and allocate resources, please refer to the next section.

curl -X POST \
  http://127.0.0.1:30100/v4/account \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {your_token}' \
  -H 'Content-Type: application/json' \
  -d '{
	"name":"peter",
	"roles":["developer"],
	"password":"{strong_password}"
}'

Resource

All APIs of the ServiceComb system are mapped to a resource type. The resources are listed below:

  • service: permission to discover, register service and instance
  • governance: permission to manage traffic control policy, such as rate limiting
  • service/schema: permission to register and discover contract
  • account: permission to manage accounts and account-locks
  • role: permission to manage roles
  • ops: permission to access admin API

Declare the resource types that an account can operate on:

 {
  "resources": [
    {
      "type": "service"
    },
    {
      "type": "service/schema"
    }
  ]
}

Label

Define the resource scope (only supported for the service resource):

  • serviceName: specify the service name
  • appId: specify which app the services belong to
  • environment: specify the env of the service

{
  "resources": [
    {
      "type": "service",
      "labels": {
        "serviceName": "order-service",
        "environment": "production"
      }
    },
    {
      "type": "service",
      "labels": {
        "serviceName": "order-service",
        "environment": "acceptance"
      }
    }
  ]
}

Verbs

Define what kind of action can be applied to a resource by an account. There are 4 kinds:

  • get
  • delete
  • create
  • update

Declare the resource type and actions:

{
  "resources": [
    {
      "type": "service"
    },
    {
      "type": "account"
    }
  ],
  "verbs": [
    "get"
  ]
}

Roles

Two default roles are provided after RBAC init:

  • admin: can operate account and role resource
  • developer: can operate any resource except account and role resource

Each role includes perms elements indicating what kind of resources can be operated by this role. For example:

A role "TeamA" can get and create any service but can only delete or update "order-service":

{
  "name": "TeamA",
  "perms": [
    {
      "resources": [
        {
          "type": "service"
        }
      ],
      "verbs": [
        "get",
        "create"
      ]
    },
    {
      "resources": [
        {
          "type": "service",
          "labels": {
            "serviceName": "order-service"
          }
        }
      ],
      "verbs": [
        "update",
        "delete"
      ]
    }
  ]
}

Create a new role and how to use it

You can also create a new role and grant perms to it.

  1. Add a new role and allocate resources to it. In the example below, a new role named "TeamA" is created and resources are allocated to it.
curl -X POST \
  http://127.0.0.1:30100/v4/role \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {your_token}' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "TeamA",
  "perms": [
    {
      "resources": [
        {
          "type": "service"
        }
      ],
      "verbs": [
        "get",
        "create"
      ]
    },
    {
      "resources": [
        {
          "type": "service",
          "labels": {
            "serviceName": "order-service"
          }
        }
      ],
      "verbs": [
        "update",
        "delete"
      ]
    }
  ]
}'

2. Then, assign roles to the user account "peter". The example below assigns the "TeamA" role created above; an account may also hold additional roles, including roles that have no resources allocated yet.

curl -X POST \
  http://127.0.0.1:30100/v4/account \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {your_token}' \
  -H 'Content-Type: application/json' \
  -d '{
	"name":"peter",
	"password":"{strong_password}",
	"roles": ["TeamA"]
}'

3. Next, generate a token for the user.

curl -X POST \
  http://127.0.0.1:30100/v4/token \
  -d '{
  	"name":"peter",
  	"password":"{strong_password}"
  }'

4. Finally, user "peter" carries the token to access resources.

For example, this request:

curl -X POST \
  http://127.0.0.1:30100/v4/default/registry/microservices \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {peter_token}' \
  -d '{
        "service": {
          "serviceId": "11111-22222-33333",
          "appId": "test",
          "serviceName": "test",
          "version": "1.0.0"
        }
}'

will succeed, while the following request:

curl -X DELETE \
  http://127.0.0.1:30100/v4/default/registry/microservices \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {peter_token}' 

will be rejected, because "peter" has no permission for that operation.

Fast Registration

The fast registration feature can support registering millions of instances.

This feature is primarily intended for scenarios that require an ultra-high-performance registry; it is not recommended when performance requirements are low or the number of instances is small.

This feature is turned off by default. If you need fast registration, you should turn on the fast registration switch.

When this feature is enabled and you call the register API, the service center puts the instance into a queue and directly returns the instanceId to the user; the instance is then registered asynchronously by a timed task.

QuickStart Guide

1. Configure the fast registration queue size to enable fast registration

If queueSize is bigger than 0, fast registration is triggered.

The default configuration of /conf/app.yaml is as follows:

register:
  fastRegistration:
    # this config is only support in mongo case now
    # if fastRegister.queueSize is > 0, enable to fast register instance, else register instance in normal case
    # if fastRegister is enabled, instance will be registered asynchronously,
    # just put instance in the queue and return instanceID, and then register through the timing task
    queueSize: 0

Configure queueSize in /conf/app.yaml, for example set queueSize to 500,000:

register.fastRegistration.queueSize=500000
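
In YAML form this corresponds to the following fragment of /conf/app.yaml (a sketch of the queue enabled with 500,000 entries):

register:
  fastRegistration:
    queueSize: 500000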

2. Start the service center

./service-center

3. Call the registry interface

Call the register interface and you will receive the instanceID quickly; a fast registration has now been completed.

  • The instance registration API can be called concurrently
  • There is a slight delay between returning the instanceID and actually registering the instance in the database, but even for 1,000,000 instances the registration delay is within seconds
  • If the instance cannot be discovered after more than 15 minutes, there may be a problem with the environment. The client can register again with the instanceID that has already been generated and returned to the user

Process Design

The flow chart is as follows:

_images/fast_register_design.png

Normal Case:

If fast registration is enabled, the instance is put into the queue and eventually registered to MongoDB in batches by a timed task (the interval is 100 milliseconds)

Abnormal Case:

  1. If the connection between Mongo and the service center is broken and registration fails, the instance is put into the failure queue and registered again
  2. If registration fails 3 consecutive times, the circuit breaker opens for 5s and closes again after a successful registration
  3. If a single instance fails to register more than 500 times, the instance is discarded; the SDK will register it again when the heartbeat finds that the instance does not exist

Attention

1. This feature is only available with the Mongo backend; the etcd backend does not support it

2. Because registration is asynchronous, there is a certain delay before the instance is actually stored, but the delay is generally at the second level

Performance Test

The performance of fast instance registration is about three times better than that of normal registration.

best performance test:

| service center | mongoDB | concurrency | tps | latency | queueSize |
| --- | --- | --- | --- | --- | --- |
| 8u16g2 | 16u32g | 200 | 9w | 1mm | 100w |
| 16u32g2 | 16u32g | 500 | 15w | 2mm | 100w |

ServiceComb Turbo(experimental)

A high-performance Service Center running mode; it leverages a high-performance codec and HTTP implementation, etc., to gain better performance.

How to enable

edit conf/chassis.yaml

servicecomb:
  codec:
    plugin: gccy/go-json
  protocols:
    rest:
      listenAddress: 127.0.0.1:30106

edit conf/app.yaml

server:
  turbo: true

Test environment: service center and etcd deployed on the local host.

Resource Consumption:
  • 2 CPU cores, 4 threads
  • 8 GB memory
  • SSD
  • concurrency 10
  • VirtualBox, Ubuntu 20.04
Topology:

Service center and etcd are deployed on the local host; the benchmark tool also runs on the same host, so the results are affected by the benchmark tool itself.

Report

| API | No Turbo | Turbo |
| --- | --- | --- |
| register growing instance | 603/s | 826/s |
| register same instance | 4451/s | 7178/s |
| heartbeat one instance | 6121/s | 9013/s |
| find one instance | 6295/s | 8748/s |
| find 100 instances | 2519/s | 3751/s |
| find 1000 instances | 639/s | 871/s |

Syncer

Service-Center supports synchronization. If you want to use it, you can refer to the steps below.

preparation before installation

download package
Note: Only the 2.1+ version of sc supports synchronization
deployment Architecture

As shown in the figure below, etcd can be deployed as an independent cluster.

_images/syncer-deploy-architecture.png

It can also be deployed like this.

_images/syncer-deploy-architecture-2.png

_images/vm-deploy.png

installation operation

install etcd

Refer to the official website documentation.

install sc
Note: Only the 2.1+ version of sc supports synchronization
step 1

modify the files in conf

app.conf: modify frontend_host_ip and httpaddr to the local ip address

_images/app-conf.png

app.yaml:

modify the following (a configuration sketch follows the screenshots below):

  1. server.host
  2. REGISTRY_KIND
  3. REGISTRY_ETCD_CLUSTER_NAME
  4. REGISTRY_ETCD_CLUSTER_MANAGER_ENDPOINTS
  5. REGISTRY_ETCD_CLUSTER_ENDPOINTS
  6. registry.instance.datacenter.name
  7. registry.instance.datacenter.region
  8. registry.instance.datacenter.availableZone

_images/server-host.png

_images/app-yaml.png

_images/instance-az.png
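
A hedged sketch of the corresponding app.yaml fragments for one node (field paths follow the tables in the Data Source section; the datacenter values mirror the health-check output shown later, and the IPs/names must be adapted to your environment):

server:
  host: <local ip>                               # server.host
registry:
  kind: etcd                                     # REGISTRY_KIND
  etcd:
    cluster:
      name: sc-cluster-1                         # REGISTRY_ETCD_CLUSTER_NAME
      managerEndpoints: http://<etcd ip>:2380    # REGISTRY_ETCD_CLUSTER_MANAGER_ENDPOINTS
      endpoints: <etcd ip>:2379                  # REGISTRY_ETCD_CLUSTER_ENDPOINTS
  instance:
    datacenter:
      name: dz1
      region: rg1
      availableZone: az1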

chassis.yaml: modify listenAddress to the local ip address

_images/chassis.png

syncer.yaml: turn on the enableOnStart switch and set endpoints to the SC machine IPs in the peer region (region-2)

step 2

Repeat the above operation to modify the configuration of sc on other machines.

step 3
sh start-service-center.sh
step 4
sh start-frontend.sh
step 5

Open the front-end interface of any node.

_images/front-1.png

Instances in the peer region have been synchronized.

_images/front-2.png

verify health
curl -k http://{ip}:30100/health
{
    "instances": [
        {
            "instanceId": "e810f2f3baf711ec9486fa163e176e7b",
            "serviceId": "7062417bf9ebd4c646bb23059003cea42180894a",
            "endpoints": [
                "rest://[::]:30100/"
            ],
            "hostName": "etcd03",
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1649833445",
            "dataCenterInfo": {
                "name": "dz1",
                "region": "rg1",
                "availableZone": "az1"
            },
            "modTimestamp": "1649833445",
            "version": "2.1.0"
        },
        {
            "instanceId": "e810f2f3baf711ec9486fa163e176e8b",
            "serviceId": "7062417bf9ebd4c646bb23059003cea42180896a",
            "endpoints": [
                "rest://[::]:30100/"
            ],
            "hostName": "etcd04",
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1649833445",
            "dataCenterInfo": {
                "name": "dz2",
                "region": "rg2",
                "availableZone": "az2"
            },
            "modTimestamp": "1649833445",
            "version": "2.1.0"
        }
        ...
    ]
}
Congratulations!!!

Integrate with Grafana

As Service-Center uses the Prometheus lib to report metrics, it is easy to integrate with Grafana. Here is a DEMO to deploy Service-Center with Grafana, and this is the template file that can be imported into Grafana.

After the import, you will get a view like the one below.

_images/integration-grafana.PNG

Note: As the template has an ASF header, please remove the header before importing the template file.

PR raising Guide

Steps

If you want to raise a PR in this repo, you can follow the guidelines below to avoid conflicts.

  1. Make your changes in your local code.
  2. Once your changes are done, clone the code from ServiceComb
git clone http://github.com/apache/servicecomb-service-center.git
cd service-center
git remote add fork http://github.com/{YOURFORKNAME}/service-center.git
git checkout -b {YOURFEATURENAME}

#Merge your local changes in this branch.

#Once your changes are done then Push the changes to your fork

git add -A

git commit -m "{JIRA-ID YOURCOMMITMESSAGE}"

git push fork {YOURFEATURENAME}
  3. Now go to GitHub, browse to your branch, and raise a PR from that branch.

Design Guides

Service-Center Design

Service-Center (SC) is a service registry that allows services to register their instance information and to discover providers of a given service. Generally, SC uses etcd to store all the information about micro-services and their instances.

_images/aggregator-design.PNG
  • API Layer: exposes the RESTful and gRPC services.
  • Metadata: the business logic to manage micro-services, instances, schemas, tags, dependencies and ACL rules.
  • Server Core: includes the data model, the request handling chain and so on.
  • Aggregator: the bridge between Core and Registry; includes the cache manager and the indexer of the registry.
  • Registry Adaptor: an abstraction layer over the registry, exposing a unified interface for upper-layer calls.

Below is the diagram stating the working principles and flow of SC.

On StartUp

This section describes a standard client registration process. We assume that micro-services are written using the java-chassis SDK or go-chassis SDK, so when a micro-service boots up, the SDK performs the following list of tasks.

  1. On startup the provider registers the micro-service with SC if it was not registered earlier, and also registers its instance information, like the IP and port on which the instance is running.
  2. SC stores the provider information in etcd.
  3. On startup the consumer retrieves the list of all provider instances from SC using the micro-service name of the provider.
  4. The consumer SDK stores all the information of the provider instances in its cache.
  5. The consumer SDK creates a web socket connection to SC to watch all the provider instance information; if there is any change on the provider side, the SDK updates its cache.
_images/onStartup.PNG

Communication

Once boot-up is successful, the consumer can communicate with providers flawlessly. Below is a diagram illustrating the communication between provider and consumer.

_images/communication.PNG
The provider instance regularly sends a heartbeat signal every 30 seconds to SC; if SC does not receive the heartbeat for a particular instance, the information in etcd expires and the provider instance information is removed.
The consumer watches the information of provider instances from SC, and if there is any change, the cache is updated.
When the consumer needs to communicate with the provider, it reads the endpoints of the provider instances from the cache and does load balancing to communicate with the provider.
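
For reference, the heartbeat sent by the provider SDK corresponds to the instance heartbeat API. A hedged example of calling it manually (the path follows the v4 API style used elsewhere in this document; the placeholders must be replaced with real IDs):

curl -X PUT \
  'http://127.0.0.1:30100/v4/default/registry/microservices/{service-id}/instances/{instance-id}/heartbeat' \
  -H 'x-domain-name: default'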

Note: Feel free to contribute to this document.

Storage structure

Backend kind is ETCD

# services
# /cse-sr/ms/files/{domain}/{project}/{serviceId}
/cse-sr/ms/files/default/default/7062417bf9ebd4c646bb23059003cea42180894a:
  {
    "serviceId": "7062417bf9ebd4c646bb23059003cea42180894a",
    "appId": "default",
    "serviceName": "SERVICECENTER",
    "description": "A default service",
    "version": "0.0.1",
    "level": "BACK",
    "schemas": [
      "firstSchema",
      "secondSchema"
    ],
    "paths": [{
                "path": "/opt/tomcat/webapp",
                "property": {
                  "allowCrossApp": "true"
                }
              }],
    "status": "UP",
    "properties": {
      "allowCrossApp": "true"
    },
    "timestamp": "1592570701",
    "framework": {
      "name": "UNKNOWN",
      "version": "0.0.1"
    },
    "alias": "SERVICECENTER",
    "modTimestamp": "1592570701",
    "environment": "development"
  }

# /cse-sr/ms/indexes/{domain}/{project}/{environment}/{appId}/{serviceName}/{serviceVersion}
/cse-sr/ms/indexes/default/default/development/default/SERVICECENTER/0.0.1:
  "7062417bf9ebd4c646bb23059003cea42180894a"

# /cse-sr/ms/alias/{domain}/{project}/{environment}/{appId}/{serviceName}/{serviceVersion}
/cse-sr/ms/alias/default/default/development/default/SERVICECENTER/0.0.1:
  "7062417bf9ebd4c646bb23059003cea42180894a"

# instances
# /cse-sr/inst/files/{domain}/{project}/{serviceId}/{instanceId}
/cse-sr/inst/files/default/default/7062417bf9ebd4c646bb23059003cea42180894a/b0ffb9feb22a11eaa76a08002706c83e:
  {
    "instanceId": "b0ffb9feb22a11eaa76a08002706c83e",
    "serviceId": "7062417bf9ebd4c646bb23059003cea42180894a",
    "endpoints": ["rest://127.0.0.1:30100/"],
    "hostName": "tian-VirtualBox",
    "status": "UP",
    "healthCheck": {
      "mode": "push",
      "interval": 30,
      "times": 3
    },
    "timestamp": "1592570701",
    "modTimestamp": "1592570701",
    "version": "0.0.1"
  }

# /cse-sr/inst/leases/{domain}/{project}/{serviceId}/{instanceId}
/cse-sr/inst/leases/default/default/7062417bf9ebd4c646bb23059003cea42180894a/b0ffb9feb22a11eaa76a08002706c83e:
  "leaseId"

# schemas
# /cse-sr/ms/schemas/{domain}/{project}/{serviceId}/{schemaId}
/cse-sr/ms/schemas/default/default/7062417bf9ebd4c646bb23059003cea42180894a/first-schema:
  "schema"

# /cse-sr/ms/schema-sum/{domain}/{project}/{serviceId}/{schemaId}
/cse-sr/ms/schema-sum/default/default/7062417bf9ebd4c646bb23059003cea42180894a/first-schema:
  "schemaSummary"

# dependencies
# /cse-sr/ms/dep-queue/{domain}/{project}/{serviceId}/{uuid}
/cse-sr/ms/dep-queue/default/default/7062417bf9ebd4c646bb23059003cea42180894a/0:
  {
    "consumer": {
      "tenant": "default/default",
      "project": "project",
      "appId": "appId",
      "serviceName": "ServiceCenter",
      "version": "0.0.1",
      "environment": "development",
      "alias": "serviceCenter"
    },
    "providers": [{
                   "tenant": "default/default",
                   "project": "project",
                   "appId": "appId",
                   "serviceName": "ServiceCenterProvider",
                   "version": "0.0.2",
                   "environment": "development",
                   "alias": "serviceCenterProvider"
                 }],
    "override": true
  }

# tags
# /cse-sr/ms/tags/{domain}/{project}/{serviceId}
/cse-sr/ms/tags/default/default/7062417bf9ebd4c646bb23059003cea42180894a:
  {
    "a": "1"
  }

# rules
# /cse-sr/ms/rules/{domain}/{project}/{serviceId}/{ruleId}
/cse-sr/ms/rules/default/default/7062417bf9ebd4c646bb23059003cea42180894a/Deny:
  {
    "ruleId": "Deny",
    "attribute": "denylist",
    "pattern": "Test*",
    "description": "test BLACK"
  }

# /cse-sr/ms/rule-indexes/{domain}/{project}/{serviceId}/{attribute}/{pattern}
/cse-sr/ms/rule-indexes/default/default/7062417bf9ebd4c646bb23059003cea42180894a/denylist/Test:
  "ruleId"

# auth
# /cse-sr/accounts/{accountName}
/cse-sr/accounts/Alice:
  {
    "_id": "xxx",
    "account": "account_name",
    "password": "password",
    "role": "admin",
    "tokenExpirationTime": "1500519927",
    "currentPassword": "password",
    "status": "normal"
  }
# record role binding to account
/cse-sr/idx-role-account/{role}/{account}:
  {no value}
# domain
# /cse-sr/domains/{domain}
/cse-sr/domains/default:

# project
# /cse-sr/domains/{domain}/{project}
/cse-sr/projects/default/default:

Backend kind is Mongo

#type Service struct {
#  Domain  string            `json:"domain,omitempty"`
#  Project string            `json:"project,omitempty"`
#  Tags    map[string]string `json:"tags,omitempty"`
#  Service *pb.MicroService  `json:"service,omitempty"`
#}

#type MicroService struct {
#  ServiceId    string             `protobuf:"bytes,1,opt,name=serviceId" json:"serviceId,omitempty" bson:"service_id"`
#  AppId        string             `protobuf:"bytes,2,opt,name=appId" json:"appId,omitempty" bson:"app"`
#  ServiceName  string             `protobuf:"bytes,3,opt,name=serviceName" json:"serviceName,omitempty" bson:"service_name"`
#  Version      string             `protobuf:"bytes,4,opt,name=version" json:"version,omitempty"`
#  Description  string             `protobuf:"bytes,5,opt,name=description" json:"description,omitempty"`
#  Level        string             `protobuf:"bytes,6,opt,name=level" json:"level,omitempty"`
#  Schemas      []string           `protobuf:"bytes,7,rep,name=schemas" json:"schemas,omitempty"`
#  Paths        []*ServicePath     `protobuf:"bytes,10,rep,name=paths" json:"paths,omitempty"`
#  Status       string             `protobuf:"bytes,8,opt,name=status" json:"status,omitempty"`
#  Properties   map[string]string  `protobuf:"bytes,9,rep,name=properties" json:"properties,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
#  Timestamp    string             `protobuf:"bytes,11,opt,name=timestamp" json:"timestamp,omitempty"`
#  Providers    []*MicroServiceKey `protobuf:"bytes,12,rep,name=providers" json:"providers,omitempty"`
#  Alias        string             `protobuf:"bytes,13,opt,name=alias" json:"alias,omitempty"`
#  LBStrategy   map[string]string  `protobuf:"bytes,14,rep,name=LBStrategy" json:"LBStrategy,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value" bson:"lb_strategy"`
#  ModTimestamp string             `protobuf:"bytes,15,opt,name=modTimestamp" json:"modTimestamp,omitempty" bson:"mod_timestamp"`
#  Environment  string             `protobuf:"bytes,16,opt,name=environment" json:"environment,omitempty" bson:"env"`
#  RegisterBy   string             `protobuf:"bytes,17,opt,name=registerBy" json:"registerBy,omitempty" bson:"register_by"`
#  Framework    *FrameWork `protobuf:"bytes,18,opt,name=framework" json:"framework,omitempty"`
#}

#collection: service
{
  "_id" : ObjectId("6021fb9527d99d766f82e44f"),
  "domain" : "new_default",
  "project" : "new_default",
  "tags" : null,
  "service" : {
    "service_id" : "6ea4d1c36a8311eba78dfa163e176e7b",
    "app" : "dep_create_dep_group",
    "service_name" : "dep_create_dep_consumer",
    "version" : "1.0.0",
    "description" : "",
    "level" : "FRONT",
    "schemas" : null,
    "paths" : null,
    "status" : "UP",
    "properties" : null,
    "timestamp" : "1612839829",
    "providers" : null,
    "alias" : "",
    "lb_strategy" : null,
    "mod_timestamp" : "1612839829",
    "env" : "",
    "register_by" : "",
    "framework" : null
  }
}

#type Instance struct {
#  Domain      string                   `json:"domain,omitempty"`
#  Project     string                   `json:"project,omitempty"`
#  RefreshTime time.Time                `json:"refreshTime,omitempty" bson:"refresh_time"`
#  Instance    *pb.MicroServiceInstance `json:"instance,omitempty"`
#}

#type MicroServiceInstance struct {
#  InstanceId     string            `protobuf:"bytes,1,opt,name=instanceId" json:"instanceId,omitempty" bson:"instance_id"`
#  ServiceId      string            `protobuf:"bytes,2,opt,name=serviceId" json:"serviceId,omitempty" bson:"service_id"`
#  Endpoints      []string          `protobuf:"bytes,3,rep,name=endpoints" json:"endpoints,omitempty"`
#  HostName       string            `protobuf:"bytes,4,opt,name=hostName" json:"hostName,omitempty"`
#  Status         string            `protobuf:"bytes,5,opt,name=status" json:"status,omitempty"`
#  Properties     map[string]string `protobuf:"bytes,6,rep,name=properties" json:"properties,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
#  HealthCheck    *HealthCheck      `protobuf:"bytes,7,opt,name=healthCheck" json:"healthCheck,omitempty" bson:"health_check"`
#  Timestamp      string            `protobuf:"bytes,8,opt,name=timestamp" json:"timestamp,omitempty"`
#  DataCenterInfo *DataCenterInfo   `protobuf:"bytes,9,opt,name=dataCenterInfo" json:"dataCenterInfo,omitempty" bson:"data_center_info"`
#  ModTimestamp   string            `protobuf:"bytes,10,opt,name=modTimestamp" json:"modTimestamp,omitempty" bson:"mod_timestamp"`
#  Version        string            `protobuf:"bytes,11,opt,name=version" json:"version,omitempty"`
#}

# collection: instance
{
  "_id" : ObjectId("60222c6f4fe067987f40803e"),
  "domain" : "default",
  "project" : "default",
  "refresh_time" : ISODate("2021-02-09T06:32:15.562Z"),
  "instance" : {
    "instance_id" : "8cde54a46aa011ebab42fa163e176e7b",
    "service_id" : "8cddc7ce6aa011ebab40fa163e176e7b",
    "endpoints" : [
        "find:127.0.0.9:8080"
    ],
    "hostname" : "UT-HOST-MS",
    "status" : "UP",
    "properties" : null,
    "health_check" : {
      "mode" : "push",
      "port" : 0,
      "interval" : 30,
      "times" : 3,
      "url" : ""
    },
    "timestamp" : "1612852335",
    "data_center_info" : null,
    "mod_timestamp" : "1612852335",
    "version" : "1.0.0"
  }
}

#type Schema struct {
#  Domain        string `json:"domain,omitempty"`
#  Project       string `json:"project,omitempty"`
#  ServiceId     string `json:"serviceId,omitempty" bson:"service_id"`
#  SchemaId      string `json:"schemaId,omitempty" bson:"schema_id"`
#  Schema        string `json:"schema,omitempty"`
#  SchemaSummary string `json:"schemaSummary,omitempty" bson:"schema_summary"`
#}

# collection schema
{
  "_id" : ObjectId("6021fb9827d99d766f82e4f7"),
  "domain" : "default",
  "project" : "default",
  "service_id" : "70302da16a8311eba7cbfa163e176e7b",
  "schema_id" : "ServiceCombTestTheLimitOfSchemasServiceMS19",
  "schema" : "ServiceCombTestTheLimitOfSchemasServiceMS19",
  "schema_summary" : "ServiceCombTestTheLimitOfSchemasServiceMS19"
}

#type Rule struct {
#  Domain    string          `json:"domain,omitempty"`
#  Project   string          `json:"project,omitempty"`
#  ServiceId string          `json:"serviceId,omitempty" bson:"service_id"`
#  Rule      *pb.ServiceRule `json:"rule,omitempty"`
#}

#type ServiceRule struct {
#  RuleId       string `protobuf:"bytes,1,opt,name=ruleId" json:"ruleId,omitempty" bson:"rule_id"`
#  RuleType     string `protobuf:"bytes,2,opt,name=ruleType" json:"ruleType,omitempty" bson:"rule_type"`
#  Attribute    string `protobuf:"bytes,3,opt,name=attribute" json:"attribute,omitempty"`
#  Pattern      string `protobuf:"bytes,4,opt,name=pattern" json:"pattern,omitempty"`
#  Description  string `protobuf:"bytes,5,opt,name=description" json:"description,omitempty"`
#  Timestamp    string `protobuf:"bytes,6,opt,name=timestamp" json:"timestamp,omitempty"`
#  ModTimestamp string `protobuf:"bytes,7,opt,name=modTimestamp" json:"modTimestamp,omitempty" bson:"mod_timestamp"`
#}
# collection rules
{
  "_id" : ObjectId("6021fb9727d99d766f82e48a"),
  "domain" : "default",
  "project" : "default",
  "service_id" : "7026973b6a8311eba792fa163e176e7b",
  "rule" : {
    "rule_id" : "702897cf6a8311eba79dfa163e176e7b",
    "rule_type" : "BLACK",
    "attribute" : "ServiceName",
    "pattern" : "18",
    "description" : "test white",
    "timestamp" : "1612839831",
    "mod_timestamp" : "1612839831"
  }
}

#type ConsumerDep struct {
#  Domain      string                 `json:"domain,omitempty"`
#  Project     string                 `json:"project,omitempty"`
#  ConsumerId  string                 `json:"consumerId,omitempty" bson:"consumer_id"`
#  UUId        string                 `json:"uuId,omitempty" bson:"uu_id"`
#  ConsumerDep *pb.ConsumerDependency `json:"consumerDep,omitempty" bson:"consumer_dep"`
#}

#type ConsumerDependency struct {
#  Consumer  *MicroServiceKey   `protobuf:"bytes,1,opt,name=consumer" json:"consumer,omitempty"`
#  Providers []*MicroServiceKey `protobuf:"bytes,2,rep,name=providers" json:"providers,omitempty"`
#  Override  bool               `protobuf:"varint,3,opt,name=override" json:"override,omitempty"`
#}

#type MicroServiceKey struct {
#  Tenant      string `protobuf:"bytes,1,opt,name=tenant" json:"tenant,omitempty"`
#  Environment string `protobuf:"bytes,2,opt,name=environment" json:"environment,omitempty" bson:"env"`
#  AppId       string `protobuf:"bytes,3,opt,name=appId" json:"appId,omitempty" bson:"app"`
#  ServiceName string `protobuf:"bytes,4,opt,name=serviceName" json:"serviceName,omitempty" bson:"service_name"`
#  Alias       string `protobuf:"bytes,5,opt,name=alias" json:"alias,omitempty"`
#  Version     string `protobuf:"bytes,6,opt,name=version" json:"version,omitempty"`
#}

# collection dependencies
{
  "_id" : ObjectId("6021fb9527d99d766f82e45f"),
  "domain" : "new_default",
  "project" : "new_default",
  "consumer_id" : "6ea4d1c36a8311eba78dfa163e176e7b",
  "uu_id" : "6eaeb1dd6a8311eba790fa163e176e7b",
  "consumer_dep" : {
    "consumer" : {
      "tenant" : "new_default/new_default",
      "env" : "",
      "app" : "dep_create_dep_group",
      "service_name" : "dep_create_dep_consumer",
      "alias" : "",
      "version" : "1.0.0"
    },
    "providers" : null,
    "override" : false
  }
}

#type DependencyRule struct {
#  Type       string                     `json:"type,omitempty"`
#  Domain     string                     `json:"domain,omitempty"`
#  Project    string                     `json:"project,omitempty"`
#  ServiceKey *pb.MicroServiceKey        `json:"serviceKey,omitempty" bson:"service_key"`
#  Dep        *pb.MicroServiceDependency `json:"dep,omitempty"`
#}

#type MicroServiceKey struct {
#  Tenant      string `protobuf:"bytes,1,opt,name=tenant" json:"tenant,omitempty"`
#  Environment string `protobuf:"bytes,2,opt,name=environment" json:"environment,omitempty" bson:"env"`
#  AppId       string `protobuf:"bytes,3,opt,name=appId" json:"appId,omitempty" bson:"app"`
#  ServiceName string `protobuf:"bytes,4,opt,name=serviceName" json:"serviceName,omitempty" bson:"service_name"`
#  Alias       string `protobuf:"bytes,5,opt,name=alias" json:"alias,omitempty"`
#  Version     string `protobuf:"bytes,6,opt,name=version" json:"version,omitempty"`
#}

#type MicroServiceDependency struct {
#  Dependency []*MicroServiceKey `json:"Dependency,omitempty"`
#}

# collection dependencies
{
  "_id" : ObjectId("6022302751a77062a95dd0da"),
  "service_key" : {
    "app" : "create_dep_group",
    "env" : "production",
    "service_name" : "create_dep_consumer",
    "tenant" : "default/default",
    "version" : "1.0.0"
  },
  "type" : "c",
  "dep" : {
    "dependency" : [
      {
        "tenant" : "default/default",
        "env" : "",
        "app" : "service_group_provider",
        "service_name" : "service_name_provider",
        "alias" : "",
        "version" : "latest"
      }
    ]
  }
}


#type Account struct {
#  ID                  string   `json:"id,omitempty"`
#  Name                string   `json:"name,omitempty"`
#  Password            string   `json:"password,omitempty"`
#  Roles               []string `json:"roles,omitempty"`
#  TokenExpirationTime string   `json:"tokenExpirationTime,omitempty" bson:"token_expiration_time"`
#  CurrentPassword     string   `json:"currentPassword,omitempty" bson:"current_password"`
#  Status              string   `json:"status,omitempty"`
#}

# collection account
{
  "_id" : ObjectId("60223e99184f264aee398238"),
  "id" : "6038bf9f6aab11ebbcdefa163e176e7b",
  "name" : "test-account1",
  "password" : "$2a$14$eYyD9DiOA1vGXOyhPTjbhO6CYuGnOVt8VQ8V/sWEmExyvwOQeNI2i",
  "roles" : [
      "admin"
  ],
  "token_expiration_time" : "2020-12-30",
  "current_password" : "tnuocca-tset1",
  "status" : ""
}

Plug-in mechanism

Required

  1. Go version 1.8(+)
  2. Compile service-center with GO_EXTLINK_ENABLED=1 and CGO_ENABLED=1
  3. The plugin file name must have the suffix '_plugin.so'
  4. All plugin interface files are in the plugin package

Plug-in names

  1. auth: Customize authentication of service-center.
  2. uuid: Customize micro-service/instance id format.
  3. auditlog: Customize audit log for any change done to the service-center.
  4. cipher: Customize encryption and decryption of TLS certificate private key password.
  5. quota: Customize quota for instance registry.
  6. tracing: Customize tracing data reporter.
  7. tls: Customize loading the tls certificates in server

Example: an authentication plug-in

Step 1: code auth.go

auth.go implements the auth interface

package main

import (
    "net/http"
)

func Identify(*http.Request) error {
	// do something
	return nil
}
Step 2: compile auth.go
GOPATH=$(pwd) go build -o auth_plugin.so -buildmode=plugin auth.go
Step 3: move the plug-in in plugins directory
mkdir ${service-center}/plugins
mv auth_plugin.so ${service-center}/plugins
Step 4: run service-center
cd ${service-center}
./servicecenter

Development Guides

Development with Service-Center

This chapter is about how to implement micro-service discovery with ServiceCenter; you can get more details here.

Micro-service registration

curl -X POST \
  http://127.0.0.1:30100/v4/default/registry/microservices \
  -H 'content-type: application/json' \
  -H 'x-domain-name: default' \
  -d '{
	"service":
	{
		"appId": "default",
		"serviceName": "DemoService",
		"version":"1.0.0"
	}
}'

and then you can get the ‘DemoService’ ID like below:

{
    "serviceId": "a3fae679211211e8a831286ed488fc1b"
}

Instance registration

Note down the micro-service ID and call the instance registration API. According to the ServiceCenter definition, one process should be registered as one instance.

curl -X POST \
  http://127.0.0.1:30100/v4/default/registry/microservices/a3fae679211211e8a831286ed488fc1b/instances \
  -H 'content-type: application/json' \
  -H 'x-domain-name: default' \
  -d '{
	"instance": 
	{
	    "hostName":"demo-pc",
	    "endpoints": [
		    "rest://127.0.0.1:8080"
	    ]
	}
}'

A successful response looks like below:

{
    "instanceId": "288ad703211311e8a831286ed488fc1b"
}

If both calls are successful, it means you have completed the micro-service registration and published the instance.

Discovery

The next step is to discover the micro-service instances by service name and version rule:

curl -X GET \
  'http://127.0.0.1:30100/v4/default/registry/instances?appId=default&serviceName=DemoService&version=latest' \
  -H 'content-type: application/json' \
  -H 'x-consumerid: a3fae679211211e8a831286ed488fc1b' \
  -H 'x-domain-name: default'

here, you can get the information from the response

{
    "instances": [
        {
            "instanceId": "b4c9e57f211311e8a831286ed488fc1b",
            "serviceId": "a3fae679211211e8a831286ed488fc1b",
            "version": "1.0.0",
            "hostName": "demo-pc",
            "endpoints": [
                "rest://127.0.0.1:8080"
            ],
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1520322915",
            "modTimestamp": "1520322915"
        }
    ]
}
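
The same three calls can also be made from code instead of curl. The following is a minimal Go sketch (an illustration, not an official client) that registers the service, registers an instance and then discovers it, assuming Service Center listens on 127.0.0.1:30100 as above; in real code you would parse the returned serviceId instead of hard-coding it.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

const sc = "http://127.0.0.1:30100/v4/default/registry"

// post sends a JSON body with the headers used in the curl examples and
// returns the raw response body.
func post(url, body string) string {
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewBufferString(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("x-domain-name", "default")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	data, _ := io.ReadAll(resp.Body)
	return string(data)
}

func main() {
	// 1. Register the micro-service (same payload as the curl example).
	fmt.Println(post(sc+"/microservices",
		`{"service":{"appId":"default","serviceName":"DemoService","version":"1.0.0"}}`))

	// 2. Register one instance; replace serviceID with the ID returned above.
	serviceID := "a3fae679211211e8a831286ed488fc1b"
	fmt.Println(post(sc+"/microservices/"+serviceID+"/instances",
		`{"instance":{"hostName":"demo-pc","endpoints":["rest://127.0.0.1:8080"]}}`))

	// 3. Discover the instance by service name and version rule.
	req, err := http.NewRequest(http.MethodGet,
		sc+"/instances?appId=default&serviceName=DemoService&version=latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("x-domain-name", "default")
	req.Header.Set("x-consumerid", serviceID)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	data, _ := io.ReadAll(resp.Body)
	fmt.Println(string(data))
}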

Module mechanism

Service Center (SC) supports an extension module mechanism so that developers can add new features to SC easily.

In just 4 steps, you can add a module to Service Center:

  1. Create a module(package) under the github.com/apache/servicecomb-service-center/server/resource package.
  2. Here you just need to implement the controller and service interfaces in your module.
  3. Register the service to SC when the module initializes.
  4. Import the package in github.com/apache/servicecomb-service-center/server/bootstrap/bootstrap.go

Quick start for the RESTful module

Implement the RouteGroup interface.

package hello

import (
	"net/http"
    
	"github.com/apache/servicecomb-service-center/pkg/rest"
)

type HelloService struct {
}

func (s *HelloService) URLPatterns() []rest.Route {
	return []rest.Route{
		{
			rest.HTTP_METHOD_GET, // Method is one of the following: GET,PUT,POST,DELETE
			"/helloWorld",        // Path contains a path pattern
			s.SayHello,           // rest callback function for the specified Method and Path
		},
	}
}

func (s *HelloService) SayHello(w http.ResponseWriter, r *http.Request) {
	// say Hi
	w.Write([]byte("Hi"))
}

Register the service in SC ROA framework when the module initializes.

package hello

import "github.com/apache/servicecomb-service-center/pkg/rest"

func init() {
    rest.RegisterServant(&HelloService{})
}

Modify the bootstrap.go file to import your module.

// module
import _ "github.com/apache/servicecomb-service-center/server/resource/hello"

Extend plugins

The following takes the extended quota management plugin as an example.

Standard Plugins

  • buildin: the standard quota management implementation; it reads the local quota configuration and limits the resource quotas.

How to extend

  1. Implement the interface Manager in server/plugin/quota/quota.go
type Manager interface {
	RemandQuotas(ctx context.Context, t ResourceType)
	GetQuota(ctx context.Context, t ResourceType) int64
	Usage(ctx context.Context, req *Request) (int64, error)
}
  2. Declare a new instance func and register it to the plugin manager
import "github.com/apache/servicecomb-service-center/pkg/plugin"

plugin.RegisterPlugin(plugin.Plugin{Kind: quota.QUOTA, Name: "your plugin name", New: NewPluginInstanceFunc})
  3. Edit conf/app.yaml
quota:
  kind: ${your plugin name}
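
As an illustration of step 1, a custom quota manager only needs to satisfy the three methods of the Manager interface above. The sketch below is kept self-contained for readability: ResourceType and Request are placeholders for the real types in server/plugin/quota, and New is the kind of constructor you would pass as the New field of plugin.Plugin in step 2.

package customquota

import "context"

// Placeholders for the real types declared in server/plugin/quota.
type ResourceType int
type Request struct{}

// fixedQuotaManager grants every resource type the same fixed quota.
type fixedQuotaManager struct {
	limit int64
}

// RemandQuotas releases quota that is no longer used; nothing to do for a
// fixed limit.
func (m *fixedQuotaManager) RemandQuotas(ctx context.Context, t ResourceType) {}

// GetQuota returns the configured limit for every resource type.
func (m *fixedQuotaManager) GetQuota(ctx context.Context, t ResourceType) int64 {
	return m.limit
}

// Usage would normally query the registry backend; this sketch reports zero
// usage so every request stays within quota.
func (m *fixedQuotaManager) Usage(ctx context.Context, req *Request) (int64, error) {
	return 0, nil
}

// New constructs the plugin instance; register it via plugin.RegisterPlugin
// (step 2) and select it in conf/app.yaml (step 3).
func New() *fixedQuotaManager {
	return &fixedQuotaManager{limit: 100}
}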

Multiple Datacenters

ServiceCenter Aggregate Architecture

Now, Service Center supports multiple-datacenter deployment. Its architecture is shown below.

architecture

As shown in the figure, we deploy an SC (Service-Center) cluster independently in each DC (datacenter). Each SC cluster manages the micro-service instances of the DC to which it belongs, and the DCs are isolated from each other. The Service-Center Aggregate service, another implementation of the discovery plug-in, can access multiple SC clusters and periodically pulls micro-service instance information, so micro-services that query the aggregate can achieve cross-DC discovery using the same API as a single SC cluster.

If SC Aggregate is not deployed globally, SC also supports another way to implement multiple-DC discovery, as shown below.

architecture

The difference between the two approaches is that a globally deployed aggregate can divert service discovery traffic, making the whole architecture more like a read-write separation: the SC in each DC manages micro-service information independently, which reduces complexity. So we recommend the first architecture.

Quick Start

Let’s assume you want to install 2 clusters of Service-Center in different DCs with the following details.

Cluster Datacenter Address
sc-1 dc-1 10.12.0.1
sc-2 dc-2 10.12.0.2

You can follow this guide to install Service-Center in cluster mode. After that, we can deploy a Service-Center Aggregate service.

Start Service-Center Aggregate

Edit the configuration of the IP/port on which SC Aggregate will run; we assume you launch it at 10.12.0.3.

vi conf/app.conf
# Replace the below values
httpaddr = 10.12.0.3
discovery_plugin = servicecenter
registry_plugin = buildin
self_register = 0
manager_cluster = "sc-1=http://10.12.0.1:30100,sc-2=http://10.12.0.2:30100"

# Start the Service-center
./service-center

Note: Please don’t run start.sh as it will also start the etcd.

Confirm the service is OK

We recommend that you use scctl; its cluster command makes it very convenient to verify that everything is OK.

scctl --addr http://10.12.0.3:30100 get cluster
#   CLUSTER |        ENDPOINTS
# +---------+-------------------------+
#   sc-1    | http://10.12.0.1:30100
#   sc-2    | http://10.12.0.2:30100

Example

Here we show a golang example of multiple-datacenter access, using an example from the go-chassis project and assuming the deployment below.

Microservice Datacenter Address
Client dc-1 10.12.0.4
Server dc-2 10.12.0.5

Notes: a go-chassis application can run in both of the above architectures. If you are using java-chassis, only the second architecture is supported at the moment. You can refer to here for more details of the second architecture.

Start Server

Edit the configuration of the ip/port on which Server will register.

vi examples/discovery/server/conf/chassis.yaml

Replace the below values

cse:
  service:
    registry:
      type: servicecenter
      address: http://10.12.0.2:30100 # the address of SC in dc-2

Run the Server

go run examples/discovery/server/main.go
Confirm the multiple datacenters discovery is OK

Since client is not a service, we check its running log.

2018-09-29 10:30:25.556 +08:00 INFO registry/bootstrap.go:69 Register [Client] success
...
2018-09-29 10:30:25.566 +08:00 WARN servicecenter/servicecenter.go:324 55c783c5c38e11e8951f0a58ac00011d Get instances from remote, key: default Server
2018-09-29 10:30:25.566 +08:00 INFO client/client_manager.go:86 Create client for highway:Server:127.0.0.1:8082
...
2018/09/29 10:30:25 AddEmploy ------------------------------ employList:<name:"One" phone:"15989351111" >

Using Java chassis for cross data center access

Now that you’ve seen the two multiple-datacenter architectures of Service Center, we’ll show you how to implement micro-service cross-datacenter access with the java-chassis framework.

architecture

Quick Start

Let’s assume you want to install 2 clusters of Service-Center in different DCs with the following details.

Cluster Datacenter Address
sc-1 dc-1 10.12.0.1
sc-2 dc-2 10.12.0.2
Start Service-Center

Edit the configuration of the IP/port on which SC will run in dc-1. Here we assume your etcd is running on http://127.0.0.1:2379 (you can follow this guide to install etcd in cluster mode).

vi conf/app.conf
# Replace the below values
httpaddr = 10.12.0.1
discovery_plugin = aggregate
aggregate_mode = "etcd,servicecenter"
manager_name = "sc-1"
manager_addr = "http://127.0.0.1:2379"
manager_cluster = "sc-1=http://10.12.0.1:30100,sc-2=http://10.12.0.2:30100"

# Start the Service-center
./service-center

Notes:
  • manager_name is the alias of the data center. manager_addr is the list of etcd cluster client URLs. manager_cluster is the full list of Service Center clusters.
  • To deploy Service Center in dc-2, repeat the above steps and just change the httpaddr value to 10.12.0.2.

Confirm the service is OK

We recommend that you use scctl; its cluster command makes it very convenient to verify that everything is OK.

scctl --addr http://10.12.0.1:30100 get cluster
#   CLUSTER |        ENDPOINTS
# +---------+-------------------------+
#   sc-1    | http://10.12.0.1:30100
#   sc-2    | http://10.12.0.2:30100

Example

Here we show a Java example of multiple-datacenter access, assuming the deployment below.

Microservice Datacenter Address
Client dc-1 10.12.0.4
Server dc-2 10.12.0.5
Start springmvc-server

Edit the configuration of the ip/port on which springmvc-server will register.

vi src/main/resources/microservice.yaml

Replace the below values

cse:
  service:
    registry:
      address: http://10.12.0.2:30100 # the address of SC in dc-2

Run the Server

mvn clean install
java -jar target/springmvc-server-0.0.1-SNAPSHOT.jar
Start springmvc-client

Edit the configuration of the ip/port on which springmvc-client will register.

vi src/main/resources/microservice.yaml

Replace the below values

cse:
  service:
    registry:
      address: http://10.12.0.1:30100 # the address of SC in dc-1

Run the Client

mvn clean install
java -jar target/springmvc-client-0.0.1-SNAPSHOT.jar
Confirm the multiple datacenters discovery is OK

Since springmvc-client is not a service, we check its running log.

...
[2018-10-19 23:04:42,800/CST][main][INFO]............. test finished ............ org.apache.servicecomb.demo.TestMgr.summary(TestMgr.java:83)

Access Distinct Clusters

ServiceCenter Aggregate Architecture

In the Multiple Datacenters article, we introduced the aggregation architecture of Service Center. In fact, this architecture applies not only to deployments across multiple datacenters, but also to aggregating services data across multiple kubernetes clusters.

architecture

The service centers deployed in distinct kubernetes clusters can communicate with each other and sync services data from the other kubernetes clusters. Applications can then discover services from a different kubernetes cluster through the service center HTTP API. This solves the problem of isolation between kubernetes clusters.

Quick Start

Let’s assume you want to install 2 clusters of Service-Center in different Kubernetes clusters with the following details.

Cluster Kubernetes namespace Node
sc1 k1 default 10.12.0.1
sc2 k2 default 10.12.0.2

To facilitate deployment, we will expose the service address of the service center in NodePort mode.

Deploy the Service Center

We use helm to deploy the service center to kubernetes here; the instructions for the specific values can be found here.

Take deployment to kubernetes cluster 1 as an example.

# login the k1 kubernetes master node to deploy sc1
git clone git@github.com:apache/servicecomb-service-center.git
cd examples/infrastructures/k8s
helm install --name k1 \
    --set sc.discovery.clusters="sc2=http://10.12.0.2:30100" \
    --set sc.discovery.aggregate="k8s\,servicecenter" \
    --set sc.registry.type="buildin" \
    --set sc.service.type=NodePort \
    service-center/

Notes: To deploy Service Center in kubernetes cluster 2, you can repeat the above steps and just change the sc.discovery.clusters value to sc1=http://10.12.0.1:30100.

Start Server

Edit the configuration of the ip/port on which Server will register.

vi examples/discovery/server/conf/chassis.yaml

Replace the below values

cse:
  service:
    registry:
      type: servicecenter
      address: http://10.12.0.2:30100 # the address of SC in dc-2

Run the Server

go run examples/discovery/server/main.go
Start Client

Edit the configuration of the ip/port on which Client will register and discover.

vi examples/discovery/client/conf/chassis.yaml

Replace the below values

cse:
  service:
    registry:
      registrator:
        type: servicecenter
        address: http://10.12.0.1:30100 # the address of SC in dc-1
      serviceDiscovery:
        type: servicecenter
        address: http://10.12.0.3:30100 # the address of SC Aggregate

Run the Client

go run examples/discovery/client/main.go
Confirm the multiple datacenters discovery is OK

Since client is not a service, we check its running log.

2018-09-29 10:30:25.556 +08:00 INFO registry/bootstrap.go:69 Register [Client] success
...
2018-09-29 10:30:25.566 +08:00 WARN servicecenter/servicecenter.go:324 55c783c5c38e11e8951f0a58ac00011d Get instances from remote, key: default Server
2018-09-29 10:30:25.566 +08:00 INFO client/client_manager.go:86 Create client for highway:Server:127.0.0.1:8082
...
2018/09/29 10:30:25 AddEmploy ------------------------------ employList:<name:"One" phone:"15989351111" >

Integrate with Kubernetes

A simple demo to deploy a ServiceCenter cluster in Kubernetes. ServiceCenter supports two deploy modes: Platform Registration and Client Side Registration.

Requirements

  1. There is a Kubernetes cluster.
  2. kubectl and the helm client are already installed on your local machine.
  3. (Optional) helm tiller is already deployed on Kubernetes.

Platform Registration

Platform registration means that ServiceCenter automatically accesses the kubernetes cluster, so micro-service instances can discover services and endpoint information through ServiceCenter.

Notes: After deployment, it only creates the ServiceCenter cluster in the default namespace.

Use Kubectl

You can use the command kubectl apply to deploy ServiceCenter cluster.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
kubectl apply -f <(helm template --name servicecomb --namespace default service-center/)
Use Helm Install

You can also use the helm commands to deploy the ServiceCenter cluster if you have already deployed helm tiller.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
helm install --name servicecomb --namespace default service-center/

Client Side Registration

With client-side registration, ServiceCenter receives and processes registration requests from micro-service instances and stores the instance information in etcd.

Notes: After deployment, it creates the ServiceCenter cluster and an etcd cluster in the default namespace.

Use Kubectl

You can use the command kubectl apply to deploy ServiceCenter cluster.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
# install etcd cluster
kubectl apply -f <(helm template --name coreos --namespace default etcd/)
# install sc cluster
kubectl apply -f <(helm template --name servicecomb --namespace default \
    --set sc.discovery.type="etcd" \
    --set sc.discovery.clusters="http://coreos-etcd-client:2379" \
    --set sc.registry.enabled=true \
    --set sc.registry.type="etcd" \
    service-center/)
Use Helm Install

You can also use the helm commands to deploy the ServiceCenter cluster if you have already deployed helm tiller.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
# install etcd cluster
helm install --name coreos --namespace default etcd/
# install sc cluster
helm install --name servicecomb --namespace default \
    --set sc.discovery.type="etcd" \
    --set sc.discovery.clusters="http://coreos-etcd-client:2379" \
    --set sc.registry.enabled=true \
    --set sc.registry.type="etcd" \
    service-center/

Confirm the deployment is OK

By default, the ServiceCenter frontend is deployed in Kubernetes using the NodePort service type.

  1. You can execute the command kubectl get pod to check that all pods are running.
  2. You can also point your browser to http://${NODE}:30103 to view the dashboard of ServiceCenter.
  3. (Recommended) You can use scctl tool to list micro-service information.
# ./scctl get svc --addr http://servicecomb-service-center:30100 -owide
  DOMAIN  |                  NAME               |            APPID        | VERSIONS | ENV | FRAMEWORK  |        ENDPOINTS         | AGE  
+---------+-------------------------------------+-------------------------+----------+-----+------------+--------------------------+-----+
  default | servicecomb-service-center-frontend | service-center-frontend | 0.0.1    |     | Kubernetes | http://172.0.1.101:30103 | 2m   
  default | servicecomb-service-center          | service-center          | 0.0.1    |     | Kubernetes | http://172.0.1.102:30100 | 2m

Clean up

If you use kubectl to deploy, take the platform registration deploy mode as an example.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
kubectl delete -f <(helm template --name servicecomb --namespace default service-center/)

If you use helm tiller to deploy, take the platform registration deploy mode as an example.

cd ${PROJECT_ROOT}/k8s
helm delete --purge servicecomb

Helm Configuration Values

  • Service Center (sc)
    • deployment (bool: true) Deploy this component or not.
    • service
      • type (string: “ClusterIP”) The kubernetes service type.
      • externalPort (int16: 30100) The external access port. If the type is ClusterIP, it is set to the access port of the kubernetes service, and if the type is NodePort, it is set to the listening port of the node.
    • discovery
      • type (string: “aggregate”) The Service Center discovery type. This can also be set to etcd or servicecenter. aggregate lets Service Center merge the discovery sources so that applications can discover microservices from all of them through the Service Center HTTP API. etcd lets Service Center start in client registration mode, where all the microservice information comes from application self-registration. servicecenter lets Service Center manage multiple Service Center clusters at the same time; it can be applied to multiple-datacenter scenarios.
      • aggregate (string: “k8s,etcd”) The discovery sources to aggregate, only enabled if type is set to aggregate. Different discovery sources are separated by commas (,), indicating that Service Center will aggregate service information from these sources. The following scenarios are supported: k8s,etcd (for managing services from multiple platforms) and k8s,servicecenter (for accessing distinct kubernetes clusters).
      • clusters (string: “sc-0=http://127.0.0.1:2380”) The cluster addresses managed by Service Center. If type is set to etcd, the format is http(s)://{etcd-1},http(s)://{etcd-2}. If type is set to another value, the format is {cluster name 1}=http(s)://{cluster-1-1},http(s)://{cluster-1-2},{cluster-2}=http(s)://{cluster-2-1}
    • registry
      • enabled (bool: false) Register Service Center itself or not.
      • type (string: “embedded_etcd”) The class of backend storage provider; this decides how Service Center stores the microservice information. embedded_etcd lets Service Center store data in the local file system, which means a distributed file system is needed if you deploy a high-availability Service Center. etcd lets Service Center store data in an existing etcd cluster, so Service Center can be a stateless service. buildin disables the storage.
      • name (string: “sc-0”) The Service Center cluster name, only enabled if type is set to embedded_etcd or etcd.
      • addr (string: “http://127.0.0.1:2380”) The backend storage provider address. This value should be a part of sc.discovery.clusters value.
  • UI (frontend)
    • deployment (bool: true) Deploy this component or not.
    • service
      • type (string: “NodePort”) The kubernetes service type.
      • externalPort (int16: 30103) The external access port. If the type is ClusterIP, it is set to the access port of the kubernetes service, and if the type is NodePort, it is set to the listening port of the node.

Integrate with Istio

These instructions will help you get started with Servicecomb-service-center-istio.

integration with Istio architecture image

1. Install dependencies

This tool can be used both inside a k8s cluster and as a standalone service running on a VM.

For both ways you have to install dependencies first.

1.1 Install Kubernetes Cluster

You can follow the K8S installation instruction to install a K8S cluster.

1.2 Install Istio

Follow this instruction to install istio

Note: the instruction is just a showcase of how to install and use istio; if you want to use it in production, you have to use a production-ready installation profile.

1.3 Install Istio DNS

Any Servicecomb service center service will be translated to a ServiceEntry in K8S. While Kubernetes provides DNS resolution for Kubernetes Services out of the box, custom ServiceEntries will not be recognized. In addition to capturing application traffic, Istio can also capture DNS requests to improve the performance and usability of your mesh.

Use the following command to install istio DNS:

cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
EOF
1.4 Install Servicecomb service center

Servicecomb service center can be installed in K8S or on a VM. Install Servicecomb service center following this instruction

2 Install Servicecomb-service-center-istio

2.1 Building

You don’t need to build from source to use Servicecomb-service-center-istio (binaries are available in apache nexus), but if you want to try out the latest and greatest, Servicecomb-service-center-istio can be easily built.

go build -o servicecomb-service-center-istio cmd/main.go
2.2 Building docker image
docker build -t servicecomb-service-center-istio:dev .
2.3 Run on VM
./servicecomb-service-center-istio --sc-addr=?SERVICE_CENTER_ADDRESS --kube-config=?KUBE_CONFIG_FILE_PATH
2.4 Run in K8S
# make sure you have modified the input args in the deployment.yaml file first; specify your service center address
kubectl apply -f manifest/deployment.yaml
2.5 Input parameters

istio-cli input parameters image

3 Example

We will use consumer-provider example to show how to use this tool.

We have two services: Provider and Consumer:

  • provider is the provider of a service which calculates and returns the sum of the square roots from 1 to a user-provided parameter x.
  • consumer acts as both a provider and a consumer. As a consumer, it calls the API provided by provider to get the sum of square roots; as a provider, it exposes a service externally that returns the result it gets from provider to its clients.

While Provider uses the servicecomb service center tech stack, Consumer uses the istio tech stack. Originally, Provider and Consumer could not discover each other.

In this demo, we are going to adopt servicecomb-service-center-istio to break the barrier between Provider and Consumer.

3.1 Build Provider and Consumer service images
3.1.1 Consumer
> docker build --target consumer -t consumer:dev .
3.1.2 Provider

Make sure you have already configured the registry-related configuration (provider/conf/chassis.yaml).

> docker build --target provider -t provider:dev .
3.2 Deploy consumer and provider services
3.2.1 Consumer

Because Consumer is an Istio-based service, it has to run in Kubernetes. We provide a deployment yaml file to deploy consumer into Kubernetes:

> kubectl apply -f manifest/consumer.yaml
3.2.2 Provider

The Provider service can be deployed either on a VM or in a Kubernetes cluster.

For VM

# go to provider folder and run
> ./provider

For Kubernetes

> kubectl apply -f manifest/provider.yaml
3.3 Testing

Now you can send a request to the consumer service; the response you get is actually returned from the provider service.

> curl http://${consumerip}:${consumerport}/sqrt?x=1000
Get result from microservice provider: Sum of square root from 1 to 1000 is 21065.833111
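
For reference, the number above can be reproduced with a few lines of Go. This is only an illustrative sketch, not the demo’s actual handler; judging from the sample output, 21065.833111 is the sum of sqrt(i) for i from 1 up to, but not including, x=1000.

package main

import (
	"fmt"
	"math"
)

// sumSqrt adds up the square roots of 1, 2, ..., x-1; the upper bound is
// inferred from the sample output above.
func sumSqrt(x int) float64 {
	sum := 0.0
	for i := 1; i < x; i++ {
		sum += math.Sqrt(float64(i))
	}
	return sum
}

func main() {
	fmt.Printf("Sum of square root from 1 to 1000 is %f\n", sumSqrt(1000))
}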

Profiling

Service Center has integrated pprof.

Configuration

server:
  pprof:
    mode: 1

Run pprof

go tool pprof http://localhost:30100/debug/pprof/profile?seconds=30

Release Notes

Service-Center Release

How to publish release documents

Step 1

Confirm what this version mainly does

https://issues.apache.org/jira/projects/SCB/issues/SCB-2270?filter=allopenissues
Step 2

Collect major issues

Step 3

Write the releaseNotes-xx.xx.xx.md


Running Apache Rat tool

This guide will help you to run the Apache Rat tool on service-center source code. For running the tool please follow the below guidelines.

Step 1

Clone the Service-Center code and download the Apache Rat tool.

git clone https://github.com/apache/servicecomb-service-center
wget http://mirrors.tuna.tsinghua.edu.cn/apache/creadur/apache-rat-0.13/apache-rat-0.13-bin.tar.gz

# Untar the release
tar -xvf apache-rat-0.13-bin.tar.gz

# Copy the jar in the root directory
cp  apache-rat-0.13/apache-rat-0.13.jar ./
Step 2

Run the Rat tool using the below command

java -jar apache-rat-0.13.jar -a -d servicecomb-service-center/ -e '(.+(\.svg|\.md|\.MD|\.cer|\.tpl|\.json|\.yaml|\.proto|\.pb.go))|(.gitignore|.gitmodules|ux|docs|vendor|licenses|bower.json|cert_pwd|glide.yaml|go.mod|go.sum)'

Below is the list of files which have been excluded from the RAT tool check.

  • *.md *.MD *.html: Skip all the Readme and Documentation file like Api Docs.
  • .gitignore .gitmodules .travis.yml : Skip the git files and travis file.
  • manifest **vendor : Skip manifest and all the files under vendor.
  • bower.json : Skip bower installation file
  • cert_pwd server.cer trust.cer : Skip ssl files
  • *.tpl : Ignore template files
  • glide.yaml go.mod go.sum : Skip dependency config files
  • docs : Skip document files
  • .yaml : Skip configuration files
  • ux : Skip foreground files
  • .proto .pb.go : Skip proto files

You can access the latest RAT report here


Make a release

See here


Archive

Step 1
If you are doing a release for the first time, you can read this document.

Execute the script to archive the source code and generate the digest and signature

bash scripts/release/archive.sh apache-servicecomb-service-center 2.0.0 littlecui@apache.org

List the current directory:

-rw-rw-r--  1 ubuntu ubuntu 3.1M Jun  8 20:35 apache-servicecomb-service-center-2.0.0-src.tar.gz
-rw-rw-r--  1 ubuntu ubuntu  862 Jun  8 20:35 apache-servicecomb-service-center-2.0.0-src.tar.gz.asc
-rw-rw-r--  1 ubuntu ubuntu  181 Jun  8 20:35 apache-servicecomb-service-center-2.0.0-src.tar.gz.sha512
Step 2

Push to the apache dev repo

svn co https://dist.apache.org/repos/dist/dev/servicecomb/
cd servicecomb/
mkdir -p 2.0.0
cp apache-servicecomb-service-center-* 2.0.0/
svn add .
svn ci --username xxx --password xxx -m "Add the Service-Center 2.0.0 version"

Add tag

Step 1

Push new tag to repo

git clone https://github.com/apache/servicecomb-service-center.git

git tag vx.x.x

git push origin vx.x.x
Step 2

Edit the tag to make the x.x.x version release

The published content should use releaseNotes-vx.x.x.md
Step 3

Initiate the version vote by sending an email to dev@servicecomb.apache.org

mail format : use plain text

mail subject : [VOTE] Release Apache ServiceComb Service-Center version 2.1.0

mail content :

Hi all,

Please review and vote on Apache ServiceCenter 2.1.0 release.

The release candidate has been tagged in GitHub as 2.1.0, available
here:
https://github.com/apache/servicecomb-service-center/releases/tag/v2.1.0

Release Notes are here:
https://github.com/apache/servicecomb-service-center/blob/v2.1.0/docs/release/releaseNotes-2.1.0.md

Thanks to everyone who has contributed to this release.

The artifacts (source, signature and checksum) corresponding to this release
candidate can be found here:
https://dist.apache.org/repos/dist/dev/servicecomb/servicecomb-service-center/2.1.0/

This has been signed with PGP key, public KEYS file is available here:
https://dist.apache.org/repos/dist/dev/servicecomb/KEYS

To verify and build, you can refer to following wiki:
https://github.com/apache/servicecomb-service-center#building--running-service-center-from-source

The vote will be open for at least 72 hours.
[ ] +1 Approve the release
[ ] +0 No opinion
[ ] -1 Do not release this package because ...

Best Regards,
robotljw
Step 4

After the vote is passed, upload the release package of the relevant version

1. Edit the vx.x.x release

2. Attach binaries by dropping them here or selecting them

apache-servicecomb-service-center-x.x.x-darwin-amd64.tar.gz

apache-servicecomb-service-center-x.x.x-linux-amd64.tar.gz

apache-servicecomb-service-center-x.x.x-windows-amd64.tar.gz