Welcome to servicecomb-service-center’s documentation!

Introduction

What is ServiceComb Service Center

Apache ServiceComb Service-Center is a RESTful-based service registry that provides micro-service discovery and micro-service management. It is based on the Open API format and provides features such as service discovery, fault tolerance, dynamic routing, and notification subscription, and is scalable by design. It has a high-performance cache design and separate entity management for micro-services and their instances. It provides out-of-the-box support for metrics and tracing, and has a web portal to manage the micro-services.

Why use ServiceComb Service Center

ServiceCenter is a service registry. Like other service registries, its main role is to solve the problem of service registration and discovery, that is, the problem of dynamic routing. At the same time, in order to better support cross-team collaboration, it adds support for service contracts (based on the OpenAPI specification). If it is used with the contract tools (Toolkit) or the Java micro-service development kit (Java Chassis), communication interfaces become transparent, allowing users to focus on business development.

Service Center Commands

scctl

scctl enables users to view the list of micro-services registered in the service center (version 1.1.0+). You can view all the commands from here.
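
For example, once SC is running on the default 127.0.0.1:30100, listing the registered micro-services looks like this (a sketch; the get svc subcommand is shown again in the Kubernetes guide below):

./scctl get svc --addr http://127.0.0.1:30100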

QuickStart Guide
Install

The easiest way to get started with scctl is to download the release from here and then untar/unzip it depending on your OS.

Check the version

Windows(apache-servicecomb-service-center-XXX-windows-amd64.zip):

scctl.exe version

Linux(apache-servicecomb-service-center-XXXX-linux-amd64.tar.gz):

./scctl version

Note: If you have already bootstrapped SC listening on 127.0.0.1:30100, this command will also print the SC version.

Running scctl from source code

Requirements

  • Go version 1.8+ is required to build the latest version of scctl.

However, if you want to try our latest code, you can follow the steps below:

#Make sure your GOPATH is set correctly and download all the vendors of SC
git clone https://github.com/apache/servicecomb-service-center.git $GOPATH/src/github.com/apache/servicecomb-service-center
cd $GOPATH/src/github.com/apache/servicecomb-service-center

cd scctl

go build

Windows:

scctl.exe version

Linux:

./scctl version

Get started

Quick Start

Getting Service Center

The easiest way to get Service Center is to use one of the pre-built release binaries which are available for Linux, Windows and Docker.

Running Service Center using the Release

You can download our latest release from the ServiceComb Website. When you have the release, you can execute the start script to run Service Center.

Windows(apache-servicecomb-service-center-XXX-windows-amd64.zip):

start-service-center.bat

Linux(apache-servicecomb-service-center-XXXX-linux-amd64.tar.gz):

./start-service-center.sh

Docker:

docker pull servicecomb/service-center
docker run -d -p 30100:30100 servicecomb/service-center

Note: The releases of Service-Center use embedded etcd. If you want to use a separate instance of etcd, you can deploy etcd separately and configure the etcd IP here:

vi conf/app.conf

## Edit this file
# registry address
# 1. if registry_plugin equals to 'embedded_etcd'
# manager_name = "sc-0"
# manager_addr = "http://127.0.0.1:2380"
# manager_cluster = "sc-0=http://127.0.0.1:2380"
# 2. if registry_plugin equals to 'etcd'
# manager_cluster = "127.0.0.1:2379"
manager_cluster = "127.0.0.1:2379"

By default SC comes up on 127.0.0.1:30100; you can change this address in the configuration here:

vi conf/app.conf

httpaddr = 127.0.0.1
httpport = 30100

Building & Running Service-Center from source

Requirements

  • Go version 1.8+ is required to build the latest version of Service-Center.

Download the Code

git clone https://github.com/apache/servicecomb-service-center.git $GOPATH/src/github.com/apache/servicecomb-service-center
cd $GOPATH/src/github.com/apache/servicecomb-service-center

Dependencies

You can download dependencies directly using the go mod command. Please follow the steps below to download all the dependencies.

# greater than go1.11
GO111MODULE=on go mod download
GO111MODULE=on go mod vendor

Build the Service-Center

go build -o service-center

First, you need to run an etcd (version: 3.x) as the database service and then modify the etcd IP and port in the Service Center configuration file (./etc/conf/app.conf : manager_cluster).

wget https://github.com/coreos/etcd/releases/download/v3.1.8/etcd-v3.1.8-linux-amd64.tar.gz
tar -xvf etcd-v3.1.8-linux-amd64.tar.gz
cd etcd-v3.1.8-linux-amd64
./etcd

cd $GOPATH/src/github.com/apache/servicecomb-service-center
cp -r ./etc/conf .
./service-center

This will bring up Service Center listening on ip/port 127.0.0.1:30100 for service communication. If you want to change the listening ip/port, you can modify it in the Service Center configuration file (./conf/app.conf : httpaddr, httpport).
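
To quickly verify that SC is up, you can query the health API (a sketch, assuming the default listen address; the same API appears again in the cluster guide below):

curl http://127.0.0.1:30100/v4/default/registry/health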

Running Frontend using the Release

You can download our latest release from the ServiceComb Website, untar it, and run start-frontend.sh/start-frontend.bat. This will bring up the Service-Center UI on http://127.0.0.1:30103.

Windows(apache-servicecomb-service-center-XXX-windows-amd64.zip):

start-frontend.bat

Linux(apache-servicecomb-service-center-XXXX-linux-amd64.tar.gz):

./start-frontend.sh

Note: By default the frontend runs on 127.0.0.1; if you want to change this, you can do so in conf/app.conf.

frontend_host_ip=127.0.0.1
frontend_host_port=30103

You can follow the guide over here to run the Frontend from source.

User Guides

PR raising Guide

Steps

If you want to raise a PR in this repo, you can follow the guidelines below to avoid conflicts.

  1. Make your changes in your local code.
  2. Once your changes are done, clone the code from ServiceComb:

git clone http://github.com/apache/servicecomb-service-center.git
cd service-center
git remote add fork http://github.com/{YOURFORKNAME}/service-center.git
git checkout -b {YOURFEATURENAME}

# Merge your local changes in this branch.
# Once your changes are done, push the changes to your fork.

git add -A
git commit -m "{JIRA-ID YOURCOMMITMESSAGE}"
git push fork {YOURFEATURENAME}

  3. Now go to GitHub, browse to your branch and raise a PR from that branch.

Setup SSL/TLS

Requirement

Service center (SC) takes several files for its SSL/TLS options.

  1. Environment variable ‘SSL_ROOT’: the directory containing the certificates. If not set, ‘etc/ssl’ under the SC work directory is used.
  2. $SSL_ROOT/trust.cer: trusted certificate authority.
  3. $SSL_ROOT/server.cer: certificate used for SSL/TLS connections to SC.
  4. $SSL_ROOT/server_key.pem: key for the certificate. If the key is encrypted, ‘cert_pwd’ must be set.
  5. $SSL_ROOT/cert_pwd (optional): the password used to decrypt the private key.

Configuration

Please modify conf/app.conf before starting up SC.

  1. ssl_mode: enable SSL/TLS mode. [0, 1]
  2. ssl_verify_client: whether SC verifies the client (including the etcd server). [0, 1]
  3. ssl_min_version: minimal SSL/TLS protocol version. [”TLSv1.0”, “TLSv1.1”, “TLSv1.2”, “TLSv1.3”], depending on the Go version
  4. ssl_ciphers: a list of cipher suites. By default: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256. An example follows this list.
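
For example, a minimal conf/app.conf sketch that enables TLS with the options above (the cipher list just restates part of the default set; adjust it to your own policy):

ssl_mode = 1
ssl_verify_client = 1
ssl_min_version = "TLSv1.2"
ssl_ciphers = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"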

Data Source

Service-Center supports multiple DB configurations. Configure app.yaml according to your needs.

registry:
  # buildin, etcd, embedded_etcd, mongo
  kind: etcd
  # registry cache; if this option is set to 0, the service center can run
  # with lower memory but will no longer push events to clients.
  cache:
    mode: 1
    # the cache will be cleared after this ttl; if not set, the cache is never cleared
    ttl:
  # enabled if registry.kind is etcd or embedded_etcd
| field | description | required | value |
|----|----|----|----|
| registry.kind | database type | yes | etcd / embedded_etcd / mongo |
| registry.cache.mode | open cache (1 is on, 0 is off) | yes | 1 / 0 |
| registry.cache.ttl | cache timeout (if not set, the cache is never cleared) | no | an integer time, like 30s/20m/10h |
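
For example, to keep the cache enabled but have entries cleared every 30s (a sketch using the example values from the table above):

registry:
  kind: etcd
  cache:
    mode: 1
    ttl: 30s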

Etcd

Download etcd according to your own environment. Etcd installation package address.

Configure app.yaml according to your needs.

etcd:
  # the interval of etcd health check, aggregation conflict check and sync loop
  autoSyncInterval: 30s
  compact:
    # indicates how many revisions you want to keep in etcd
    indexDelta: 100
    interval: 12h
  cluster:
    # if registry_plugin equals to 'embedded_etcd', then
    # name: sc-0
    # managerEndpoints: http://127.0.0.1:2380"
    # endpoints: sc-0=http://127.0.0.1:2380
    # if registry_plugin equals to 'etcd', then
    # endpoints: 127.0.0.1:2379
    endpoints: 127.0.0.1:2379
  # the timeout for failing to establish a connection
  connect:
    timeout: 10s
  # the timeout for failing to read response of registry
  request:
    timeout: 30s
| field | description | required | value |
|----|----|----|----|
| registry.etcd.autoSyncInterval | synchronization interval | yes | an integer time, like 30s/20m/10h |
| registry.etcd.compact.indexDelta | revisions retained in etcd | yes | a 64-bit integer, like 100 |
| registry.etcd.compact.interval | compaction interval | yes | an integer time, like 30s/20m/10h |
| registry.etcd.cluster.endpoints | endpoints address | yes | string, like 127.0.0.1:2379 |
| registry.etcd.connect.timeout | the timeout for establishing a connection | yes | an integer time, like 30s/20m/10h |
| registry.etcd.request.timeout | request timeout | yes | an integer time, like 30s/20m/10h |

Download the installation package according to the environment information

  1. Download etcd package.
  2. Unzip, modify the configuration and start etcd.
  3. Download the latest release from ServiceComb Website.
  4. Decompress, modify /conf/app.yaml.
  5. Execute the start script to run service center

Mongodb

Download mongodb according to your own environment. Mongodb installation package address.

Configure app.yaml according to your needs.

mongo:
  cluster:
    uri: mongodb://localhost:27017
    sslEnabled: false
    rootCAFile: /opt/ssl/ca.pem
    verifyPeer: false
    certFile: /opt/ssl/client.crt
    keyFile: /opt/ssl/client.key
| field | description | required | value |
|----|----|----|----|
| registry.mongo.cluster.uri | mongodb server address | yes | string, like mongodb://localhost:27017 |
| registry.mongo.cluster.sslEnabled | whether ssl is enabled | yes | false / true |
| registry.mongo.cluster.rootCAFile | if sslEnabled is true, the CA file path must be set | yes | string, like /opt/ssl/ca.pem |
| registry.mongo.cluster.verifyPeer | insecure skip verify | yes | false / true |
| registry.mongo.cluster.certFile | the cert file path, set according to the configuration of the mongodb server | no | string, like /opt/ssl/client.crt |
| registry.mongo.cluster.keyFile | the key file path, set according to the configuration of the mongodb server | no | string, like /opt/ssl/client.key |

Download the installation package according to the environment information

  1. Download mongodb package.
  2. Unzip, modify the configuration and start mongodb (see: Mongodb configure ssl).
  3. Download the latest release from ServiceComb Website.
  4. Decompress, modify /conf/app.yaml.
  5. Execute the start script to run service center

Heartbeat

Heartbeat configuration. Configure app.yaml according to your needs.

heartbeat:
  # configuration of websocket long connection
  websocket:
    pingInterval: 30s
  # heartbeat.kind: "checker" or "cache"
  # if heartbeat.kind equals 'cache', you should set cacheCapacity, workerNum and timeout
  # capacity = 10000
  # workerNum = 10
  # timeout = 10
  kind: cache
  cacheCapacity: 10000
  workerNum: 10
  timeout: 10
| field | description | required | value |
|----|----|----|----|
| heartbeat.websocket.pingInterval | websocket ping interval | yes | like 30s |
| heartbeat.kind | there are two types of heartbeat plug-ins: with cache and without cache | yes | cache / checker |
| heartbeat.cacheCapacity | cache capacity | yes | an integer, like 10000 |
| heartbeat.workerNum | the number of workers | yes | an integer, like 10 |
| heartbeat.timeout | processing task timeout (default unit: s) | yes | an integer, like 10 |

Deploying Service-Center

Deploying Service-Center in Cluster Mode

As Service-center is a stateless application, it can be seamlessly deployed in cluster mode to achieve HA. SC depends on etcd to store the micro-services information, so you can opt for running etcd standalone or in cluster mode. Once you are done with installing etcd in either cluster or standalone mode, you can follow the steps below to run Service-Center.

Let’s assume you want to install 2 instances of Service-Center on VMs with the following details:

| Name | Address |
|----|----|
| VM1 | 10.12.0.1 |
| VM2 | 10.12.0.2 |

Here we assume your etcd is running on http://10.12.0.4:2379 (you can follow this guide to install etcd in cluster mode.)

Step 1

Download the SC release from here on all the VMs.

# Untar the release
# tar -xvf service-center-X.X.X-linux-amd64.tar.gz

Note: Please don’t run start.sh as it will also start the etcd.

Step 2

Edit the configuration of the ip/port on which SC will run, and the etcd ip.

VM1

# vi conf/app.conf
#Replace the below values
httpaddr = 10.12.0.1
manager_cluster = "10.12.0.4:2379"

# Start the Service-center
./service-center

VM2

# vi conf/app.conf
#Replace the below values
httpaddr = 10.12.0.2
manager_cluster = "10.12.0.4:2379"

# Start the Service-center
./service-center

Note: In manager_cluster you can put multiple instances of etcd in the cluster, like

manager_cluster = "10.12.0.4:2379,10.12.0.X:2379,10.12.0.X:2379"

Step 3

Verify your instances

# curl http://10.12.0.1:30101/v4/default/registry/health
{
    "instances": [
        {
            "instanceId": "d6e9e976f9df11e7a72b286ed488ff9f",
            "serviceId": "d6e99f4cf9df11e7a72b286ed488ff9f",
            "endpoints": [
                "rest://10.12.0.1:30100"
            ],
            "hostName": "service_center_10_12_0_1",
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1516012543",
            "modTimestamp": "1516012543"
        },
        {
            "instanceId": "16d4cb35f9e011e7a58a286ed488ff9f",
            "serviceId": "d6e99f4cf9df11e7a72b286ed488ff9f",
            "endpoints": [
                "rest://10.12.0.2:30100"
            ],
            "hostName": "service_center_10_12_0_2",
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1516012650",
            "modTimestamp": "1516012650"
        }
    ]
}

As we can see here, the Service-Center can auto-discover all the instances of the Service-Center running in the cluster. This auto-discovery feature is used by the Java-Chassis SDK to auto-discover all the instances of the Service-Center by knowing at least 1 IP of a Service-Center running in the cluster.

In your microservice.yaml you can provide the SC IP of both instances or any one instance; the sdk can auto-discover the other instances and use them to get micro-service details in case the first one fails.

cse:
  service:
    registry:
      address: "http://10.12.0.1:30100,http://10.12.0.2:30100"
      autodiscovery: true

In this case the sdk will be able to discover all the instances of SC in the cluster.

Integrate with Grafana

Service-Center uses the Prometheus lib to report metrics, so it is easy to integrate with Grafana. Here is a DEMO to deploy Service-Center with Grafana, and this is the template file that can be imported into Grafana.

After the import, you can get a view like the one below.

[Figure: the Grafana dashboard after importing the template]

Note: As the template has an ASF header, please remove the header first if you import this template file.

Quota management

Resources

  • service: microservice version quotas.
  • instance: instance quotas.
  • schema: schema quotas for each microservice.
  • tag: tag quotas for each microservice.
  • account: account quotas.
  • role: role quotas.

How to configure

1. Use configuration file

Edit conf/app.yaml:

quota:
  kind: buildin
  cap:
    service:
      limit: 50000
    instance:
      limit: 150000
    schema:
      limit: 100
    tag:
      limit: 100
    account:
      limit: 1000
    role:
      limit: 100

2. Use environment variables (see the sketch after the list below)
  • QUOTA_SERVICE: the same as the config key quota.cap.service.limit
  • QUOTA_INSTANCE: the same as the config key quota.cap.instance.limit
  • QUOTA_SCHEMA: the same as the config key quota.cap.schema.limit
  • QUOTA_TAG: the same as the config key quota.cap.tag.limit
  • QUOTA_ACCOUNT: the same as the config key quota.cap.account.limit
  • QUOTA_ROLE: the same as the config key quota.cap.role.limit
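
For example, a sketch that sets the same limits as the configuration file above through the environment:

export QUOTA_SERVICE=50000
export QUOTA_INSTANCE=150000
export QUOTA_SCHEMA=100
export QUOTA_TAG=100
export QUOTA_ACCOUNT=1000
export QUOTA_ROLE=100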

Tracing

Report trace data

Edit the configuration of the tracing plugin:

trace_plugin = 'buildin' # or empty

To zipkin server

[Figure: tracing to a zipkin server]

Add the zipkin server endpoint:

# Export the environments
export TRACING_COLLECTOR=server
export TRACING_SERVER_ADDRESS=http://127.0.0.1:9411 # zipkin server endpoint

# Start the Service-center
./servicecenter

To file

[Figure: tracing to a file]

Customize the path of the trace data file:

# Export the environments
export TRACING_COLLECTOR=file
export TRACING_FILE_PATH=/tmp/servicecenter.trace # if not set, use ${work directory}/SERVICECENTER.trace

# Start the Service-center
./servicecenter

RBAC

You can choose to enable the RBAC feature. After RBAC is enabled, all requests to the service center must be authenticated.

Configuration file

Follow these steps to enable the feature.

1. Generate the RSA key pair

openssl genrsa -out private.key 4096
openssl rsa -in private.key -pubout -out public.key

2. Edit app.yaml

rbac:
  enable: true
  privateKeyFile: ./private.key # rsa key pairs
  publicKeyFile: ./public.key # rsa key pairs
auth:
  kind: buildin # must set to buildin

3. Root account

Before you start the server, you need to set an env variable with your root account password. Please note that the password must conform to the following rules: at least 8 characters, at most 32 characters, at least one upper-case letter, at least one lower-case letter, at least one digit and at least one special character.

export SC_INIT_ROOT_PASSWORD='P4$$word'

When the service center cluster initializes for the first time, it uses this password to set up the RBAC module. You can change the password via the REST API after the cluster has started, but you can no longer use SC_INIT_ROOT_PASSWORD to change it after that.

The initial account name is fixed as “root”.

To securely distribute your root account and private key, you can use a kubernetes secret.
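
A minimal sketch, assuming a kubernetes environment (sc-rbac is a hypothetical secret name; mount or inject it into the SC pods however your deployment prefers):

# sc-rbac is an illustrative name, not one the project defines
kubectl create secret generic sc-rbac \
  --from-literal=SC_INIT_ROOT_PASSWORD='P4$$word' \
  --from-file=private.key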

Generate a token

A token is the only credential for accessing the REST API. Before you access any API, you need to get a token from the service center.

curl -X POST \
  http://127.0.0.1:30100/v4/token \
  -d '{"name":"root",
"password":"P4$$word"}'

This returns a token; the token expires after 30m.

{"token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTI4MzIxODUsInVzZXIiOiJyb290In0.G65mgb4eQ9hmCAuftVeVogN9lT_jNg7iIOF_EAyAhBU"}

Authentication

In each request you must add the token to the http header:

Authorization: Bearer {token}

for example:

curl -X GET \
  'http://127.0.0.1:30100/v4/default/registry/microservices/{service-id}/instances' \
  -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTI4OTQ1NTEsInVzZXIiOiJyb290In0.FfLOSvVmHT9qCZSe_6iPf4gNjbXLwCrkXxKHsdJoQ8w' 

Change password

You must supply the current password and a token to update to a new password.

curl -X POST \
  http://127.0.0.1:30100/v4/account/root/password \
  -H 'Authorization: Bearer {your_token}' \
  -d '{
	"currentPassword":"P4$$word",
	"password":"P4$$word1"
}'

Create a new account

You can create a new account named “peter” whose role is developer. For how to add roles and allocate resources, please refer to the next section.

curl -X POST \
  http://127.0.0.1:30100/v4/account \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {your_token}' \
  -H 'Content-Type: application/json' \
  -d '{
	"name":"peter",
	"roles":["developer"],
	"password":"{strong_password}"
}'

Resource

All APIs of the ServiceComb system are mapped to a resource type. The resources are listed below:

  • service: permission to discover, register service and instance
  • governance: permission to manage traffic control policy, such as rate limiting
  • service/schema: permission to register and discover contract
  • account: permission to manage accounts and account-locks
  • role: permission to manage roles
  • ops: permission to access admin API

Declare a resource type that an account can operate on:

 {
  "resources": [
    {
      "type": "service"
    },
    {
      "type": "service/schema"
    }
  ]
}

Label

Define the resource scope (only for the service resource):

  • serviceName: specify the service name
  • appId: specify which app the services belong to
  • environment: specify the env of the service

{
  "resources": [
    {
      "type": "service",
      "labels": {
        "serviceName": "order-service",
        "environment": "production"
      }
    },
    {
      "type": "service",
      "labels": {
        "serviceName": "order-service",
        "environment": "acceptance"
      }
    }
  ]
}

Verbs

Define what kind of action can be applied to a resource by an account. There are 4 kinds:

  • get
  • delete
  • create
  • update

Declare the resource type and action:

{
  "resources": [
    {
      "type": "service"
    },
    {
      "type": "account"
    }
  ],
  "verbs": [
    "get"
  ]
}

Roles

Two default roles are provided after RBAC init:

  • admin: can operate account and role resource
  • developer: can operate any resource except account and role resource

Each role includes perms elements to indicate what kind of resources can be operated by this role, for example:

A role “TeamA” can get and create any service but can only delete or update “order-service”:

{
  "name": "TeamA",
  "perms": [
    {
      "resources": [
        {
          "type": "service"
        }
      ],
      "verbs": [
        "get",
        "create"
      ]
    },
    {
      "resources": [
        {
          "type": "service",
          "labels": {
            "serviceName": "order-service"
          }
        }
      ],
      "verbs": [
        "update",
        "delete"
      ]
    }
  ]
}

Create a new role and how to use it

You can also create a new role and give perms to this role.

  1. You can add a new role and allocate resources to it, for example:
curl -X POST \
  http://127.0.0.1:30100/v4/role \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {your_token}' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "TeamA",
  "perms": [
    {
      "resources": [
        {
          "type": "service"
        }
      ],
      "verbs": [
        "get",
        "create"
      ]
    },
    {
      "resources": [
        {
          "type": "service",
          "labels": {
            "serviceName": "order-service"
          }
        }
      ],
      "verbs": [
        "update",
        "delete"
      ]
    }
  ]
}'

2. Then, assign the role to the user account “peter”; an account may also carry an empty role that has no resources allocated.

curl -X POST \
  http://127.0.0.1:30100/v4/account \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {your_token}' \
  -H 'Content-Type: application/json' \
  -d '{
	"name":"peter",
	"password":"{strong_password}",
	"roles": ["TeamA"]
}'

3. Next, generate a token for the user.

curl -X POST \
  http://127.0.0.1:30100/v4/token \
  -d '{
  	"name":"peter",
  	"password":"{strong_password}"
  }'

4. Finally, user “peter” carries the token to access resources.

For example, this request:

curl -X POST \
  http://127.0.0.1:30100/v4/default/registry/microservices \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {peter_token}' \
  -d '{
        "service": {
          "serviceId": "11111-22222-33333",
          "appId": "test",
          "serviceName": "test",
          "version": "1.0.0"
        }
}'

succeeds, while the following request:

curl -X DELETE \
  http://127.0.0.1:30100/v4/default/registry/microservices \
  -H 'Accept: */*' \
  -H 'Authorization: Bearer {peter_token}' 

is denied; the role has no permission for this operation.

Fast Registration

The fast registration feature supports millions of instance registrations.

This feature is primarily intended for scenarios that require an ultra-high-performance registry; it is not recommended when performance requirements are low or the number of instances is small.

This feature is turned off by default; if you need fast registration, you should turn on the fast registration switch.

When this feature is on, you can call the register API as usual; the service center puts the instances in a queue, directly returns the instanceId to the user, and finally registers them asynchronously through a timed task.
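
The registration call itself is unchanged; for example, the instance registration API from the Development Guide below (the service ID here is illustrative):

curl -X POST \
  http://127.0.0.1:30100/registry/v3/microservices/a3fae679211211e8a831286ed488fc1b/instances \
  -H 'content-type: application/json' \
  -H 'x-domain-name: default' \
  -d '{
	"instance":
	{
	    "hostName":"demo-pc",
	    "endpoints": [
		    "rest://127.0.0.1:8080"
	    ]
	}
}'

The only visible difference is that the returned instanceId is handed out before the instance actually lands in the database.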

QuickStart Guide

1. Configure the fast registration queue size to enable fast registration

Fast registration is triggered if queueSize is bigger than 0.

The default configuration of /conf/app.yaml is as follows:

register:
  fastRegistration:
    # this config is only supported in the mongo case now
    # if fastRegister.queueSize is > 0, fast instance registration is enabled; otherwise instances register normally
    # if fastRegister is enabled, instances are registered asynchronously:
    # the instance is put in the queue, the instanceID is returned, and registration happens through a timed task
    queueSize: 0

Configure queueSize in /conf/app.yaml, for example set queueSize to 500,000:

register.fastRegistration.queueSize=500000

2. Start the service center

./service-center

3. Call the register interface

Call the register interface and you will receive the instanceID quickly; a fast registration has now been completed.

  • The instance registration API can be called concurrently.
  • There is a slight delay between returning the instanceID and actually registering the instance in the database, but even 1,000,000 instance registrations are delayed within seconds.
  • If the instance has not been discovered after more than 15 minutes, there may be a problem with the environment. The client can register again with the instanceID that has already been generated and returned to the user.

Process Design

The flow chart is as follows:

[Figure: fast register design]

Normal Case:

If fast registration is enabled, instances are put in the queue and eventually registered to MongoDB in batches by timed tasks (the time interval is 100 milliseconds).

Abnormal Case:

  1. If the connection between Mongo and the service center is broken and the registration fails, the instance will be put into the failure queue and registered again.
  2. If registration fails 3 consecutive times, the circuit breaker trips for 5s and resumes after a successful registration.
  3. If a single instance fails to register more than 500 times, the instance is discarded; the SDK will register it again when the heartbeat finds that the instance does not exist.

Attention

1. Only the Mongo database scenario has this feature; the etcd scenario does not.

2. Because registration is asynchronous, there is a certain amount of registration delay, basically at the second level.

Performance Test

The performance of fast instance registration is about three times better than that of normal registration.

Best performance test:

| service center | mongoDB | concurrency | tps | latency | queueSize |
|----|----|----|----|----|----|
| 8u16g*2 | 16u32g | 200 | 9w | 1ms | 100w |
| 16u32g*2 | 16u32g | 500 | 15w | 2ms | 100w |

Limits

Exceeding the limits may cause internal errors or performance degradation.

Http Server

  • Request head size: 3KB
  • Request body size: 2048KB

Microservice

  • Metadata size: 5KB
  • Schema content size: 2048KB
  • Properties size: 3KB

Instance

  • Metadata size: 5KB
  • Properties size: 3KB

Development guide

Development Guide

This chapter is about how to implement micro-service discovery with ServiceCenter; you can get more detail here.

Micro-service registration

curl -X POST \
  http://127.0.0.1:30100/registry/v3/microservices \
  -H 'content-type: application/json' \
  -H 'x-domain-name: default' \
  -d '{
	"service":
	{
		"appId": "default",
		"serviceName": "DemoService",
		"version":"1.0.0"
	}
}'

and then you can get the ‘DemoService’ ID like below:

{
    "serviceId": "a3fae679211211e8a831286ed488fc1b"
}

Instance registration

Mark down the micro-service ID and call the instance registration API. According to the ServiceCenter definition, one process should be registered as one instance.

curl -X POST \
  http://127.0.0.1:30100/registry/v3/microservices/a3fae679211211e8a831286ed488fc1b/instances \
  -H 'content-type: application/json' \
  -H 'x-domain-name: default' \
  -d '{
	"instance": 
	{
	    "hostName":"demo-pc",
	    "endpoints": [
		    "rest://127.0.0.1:8080"
	    ]
	}
}'

A successful response looks like below:

{
    "instanceId": "288ad703211311e8a831286ed488fc1b"
}

If all of these succeed, you have completed the micro-service registration and instance publishing.

Discovery

The next step is to discover the micro-service instances by service name and version rule.

curl -X GET \
  'http://127.0.0.1:30100/registry/v3/instances?appId=default&serviceName=DemoService&version=latest' \
  -H 'content-type: application/json' \
  -H 'x-consumerid: a3fae679211211e8a831286ed488fc1b' \
  -H 'x-domain-name: default'

here, you can get the information from the response

{
    "instances": [
        {
            "instanceId": "b4c9e57f211311e8a831286ed488fc1b",
            "serviceId": "a3fae679211211e8a831286ed488fc1b",
            "version": "1.0.0",
            "hostName": "demo-pc",
            "endpoints": [
                "rest://127.0.0.1:8080"
            ],
            "status": "UP",
            "healthCheck": {
                "mode": "push",
                "interval": 30,
                "times": 3
            },
            "timestamp": "1520322915",
            "modTimestamp": "1520322915"
        }
    ]
}
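
To keep a registered instance alive, the sdk renews it periodically; a sketch of the heartbeat call, assuming the v3 heartbeat endpoint and reusing the IDs from the examples above:

curl -X PUT \
  http://127.0.0.1:30100/registry/v3/microservices/a3fae679211211e8a831286ed488fc1b/instances/288ad703211311e8a831286ed488fc1b/heartbeat \
  -H 'x-domain-name: default'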

Module mechanism

Service center (SC) supports an extension module mechanism so that developers can add new features to SC easily.

In just 4 steps, you can add a module to the service center:

  1. Create a module(package) under the github.com/apache/servicecomb-service-center/server package.
  2. Here you just need to implement the controller and service interfaces in your module.
  3. And register service to SC when the module initializes.
  4. Import the package in github.com/apache/servicecomb-service-center/server/bootstrap/bootstrap.go

Quick start for the RESTful module

Implement the ROAServantService interface.

package hello

import (
	"net/http"
	"github.com/apache/servicecomb-service-center/pkg/rest"
)

type HelloService struct {
}

func (s *HelloService) URLPatterns() []rest.Route {
	return []rest.Route{
		{
		    rest.HTTP_METHOD_GET, // Method is one of the following: GET,PUT,POST,DELETE
		    "/helloWorld", // Path contains a path pattern
		    s.SayHello, // rest callback function for the specified Method and Path
        },
	}
}

func (s *HelloService) SayHello(w http.ResponseWriter, r *http.Request) {
    // say Hi
}

Register the service in SC ROA framework when the module initializes.

package hello

import roa "github.com/apache/servicecomb-service-center/pkg/rest"

func init() {
    roa.RegisterServant(&HelloService{})
}

Modify the bootstrap.go file to import your module.

// module
import _ "github.com/apache/servicecomb-service-center/server/hello"

About GRPC module

See the govern module for reference.

Quota plugins

Standard Plugins

  • buildin: the standard quota management implementation; it reads the local quota configuration and limits the resource quotas.

How to extend

  1. Implement the interface Manager in server/plugin/quota/quota.go
type Manager interface {
	RemandQuotas(ctx context.Context, t ResourceType)
	GetQuota(ctx context.Context, t ResourceType) int64
	Usage(ctx context.Context, req *Request) (int64, error)
}

  2. Declare a new instance func and register it to the plugin manager
import "github.com/apache/servicecomb-service-center/pkg/plugin"

plugin.RegisterPlugin(plugin.Plugin{Kind: quota.QUOTA, Name: "your plugin name", New: NewPluginInstanceFunc})

  3. Edit conf/app.yaml
quota:
  kind: ${your plugin name}

Multiple Datacenters

ServiceCenter Aggregate Architecture

Now, the service center supports multiple-datacenter deployment. Its architecture is shown below.

[Figure: ServiceCenter aggregate architecture]

As shown in the figure, we deploy an SC (Service-Center) cluster independently in each DC (datacenter). Each SC cluster manages the micro-service instances of the DC to which it belongs, and the DCs are isolated from each other. The Service-Center Aggregate service, another implementation of the discovery plug-in, can access multiple SC clusters and periodically pull micro-service instance information, so that micro-services which can reach the aggregate can discover across DCs using the same API as a single SC cluster.

If the SC aggregate is not deployed globally, SC also supports another way to implement multiple-DC discovery, as shown below.

[Figure: multiple-DC discovery without a global aggregate]

The difference between the two approaches is that a globally deployed aggregate can divert service discovery traffic; the whole architecture is more like a read-write separation architecture, and the SC of each DC manages micro-service information independently, which reduces complexity. So we recommend the first architecture.

Quick Start

Let’s assume you want to install 2 clusters of Service-Center in different DCs with following details.

| Cluster | Datacenter | Address |
|----|----|----|
| sc-1 | dc-1 | 10.12.0.1 |
| sc-2 | dc-2 | 10.12.0.2 |

You can follow this guide to install Service-Center in cluster mode. After that, we can deploy the Service-Center Aggregate service.

Start Service-Center Aggregate

Edit the configuration of the ip/port on which the SC aggregate will run; we assume you launch it at 10.12.0.3.

vi conf/app.conf
# Replace the below values
httpaddr = 10.12.0.3
discovery_plugin = servicecenter
registry_plugin = buildin
self_register = 0
manager_cluster = "sc-1=http://10.12.0.1:30100,sc-2=http://10.12.0.2:30100"

# Start the Service-center
./service-center

Note: Please don’t run start.sh as it will also start the etcd.

Confirm the service is OK

We recommend that you use scctl; its cluster command makes it very convenient to verify that the service is OK.

scctl --addr http://10.12.0.3:30100 get cluster
#   CLUSTER |        ENDPOINTS
# +---------+-------------------------+
#   sc-1    | http://10.12.0.1:30100
#   sc-2    | http://10.12.0.2:30100

Example

Here we show a golang example of multiple-datacenter access, using an example from the go-chassis project, with the assumptions below.

| Microservice | Datacenter | Address |
|----|----|----|
| Client | dc-1 | 10.12.0.4 |
| Server | dc-2 | 10.12.0.5 |

Notes: A go-chassis application can run perfectly in both of the above architectures. If you are using java-chassis, only the second architecture is supported at the moment. You can refer to here for more details of the second architecture.

Start Server

Edit the configuration of the ip/port on which Server will register.

vi examples/discovery/server/conf/chassis.yaml

Replace the below values

cse:
  service:
    registry:
      type: servicecenter
      address: http://10.12.0.2:30100 # the address of SC in dc-2

Run the Server

go run examples/discovery/server/main.go

Confirm the multiple datacenters discovery is OK

Since the client is not a service, we check its running log.

2018-09-29 10:30:25.556 +08:00 INFO registry/bootstrap.go:69 Register [Client] success
...
2018-09-29 10:30:25.566 +08:00 WARN servicecenter/servicecenter.go:324 55c783c5c38e11e8951f0a58ac00011d Get instances from remote, key: default Server
2018-09-29 10:30:25.566 +08:00 INFO client/client_manager.go:86 Create client for highway:Server:127.0.0.1:8082
...
2018/09/29 10:30:25 AddEmploy ------------------------------ employList:<name:"One" phone:"15989351111" >

Using Java chassis for cross data center access

Now that you’ve seen the two multiple-datacenter architectures of the Service Center, we’ll show you how to implement micro-service cross-datacenter access with the java-chassis framework.

[Figure: java-chassis cross-datacenter architecture]

Quick Start

Let’s assume you want to install 2 clusters of Service-Center in different DCs with following details.

| Cluster | Datacenter | Address |
|----|----|----|
| sc-1 | dc-1 | 10.12.0.1 |
| sc-2 | dc-2 | 10.12.0.2 |

Start Service-Center

Edit the configuration of the ip/port on which SC will run in dc-1. Here we assume your etcd is running on http://127.0.0.1:2379 (you can follow this guide to install etcd in cluster mode).

vi conf/app.conf
# Replace the below values
httpaddr = 10.12.0.1
discovery_plugin = aggregate
aggregate_mode = "etcd,servicecenter"
manager_name = "sc-1"
manager_addr = "http://127.0.0.1:2379"
manager_cluster = "sc-1=http://10.12.0.1:30100,sc-2=http://10.12.0.2:30100"

# Start the Service-center
./service-center

Notes:
  • manager_name is the alias of the data center. manager_addr is the etcd cluster client urls. manager_cluster is the full list of Service Center clusters.
  • To deploy Service Center in dc-2, you can repeat the above steps and just change the httpaddr value to 10.12.0.2.

Confirm the service is OK

We recommend that you use scctl; its cluster command makes it very convenient to verify that the service is OK.

scctl --addr http://10.12.0.3:30100 get cluster
#   CLUSTER |        ENDPOINTS
# +---------+-------------------------+
#   sc-1    | http://10.12.0.1:30100
#   sc-2    | http://10.12.0.2:30100

Example

Here we show a java example of multiple-datacenter access, with the assumptions below.

| Microservice | Datacenter | Address |
|----|----|----|
| Client | dc-1 | 10.12.0.4 |
| Server | dc-2 | 10.12.0.5 |

Start springmvc-server

Edit the configuration of the ip/port on which springmvc-server will register.

vi src/main/resources/microservice.yaml

Replace the below values

cse:
  service:
    registry:
      address: http://10.12.0.2:30100 # the address of SC in dc-2

Run the Server

mvn clean install
java -jar target/springmvc-server-0.0.1-SNAPSHOT.jar

Start springmvc-client

Edit the configuration of the ip/port on which springmvc-client will register.

vi src/main/resources/microservice.yaml

Replace the below values

cse:
  service:
    registry:
      address: http://10.12.0.1:30100 # the address of SC in dc-1

Run the Client

mvn clean install
java -jar target/springmvc-client-0.0.1-SNAPSHOT.jar

Confirm the multiple datacenters discovery is OK

Since springmvc-client is not a service, we check its running log.

...
[2018-10-19 23:04:42,800/CST][main][INFO]............. test finished ............ org.apache.servicecomb.demo.TestMgr.summary(TestMgr.java:83)

Integrate with Kubernetes

A simple demo to deploy a ServiceCenter cluster in Kubernetes. ServiceCenter supports two deployment modes: platform registration and client-side registration.

Requirements

  1. There is a Kubernetes cluster.
  2. kubectl and the helm client are already installed on your local machine.
  3. (Optional) helm tiller is already deployed on Kubernetes.

Platform Registration

Platform registration means that the ServiceCenter automatically accesses the kubernetes cluster, and micro-service instances can discover service and endpoint information through the ServiceCenter.

Notes: After deployment, it only creates the ServiceCenter cluster in the default namespace.

Use Kubectl

You can use the command kubectl apply to deploy ServiceCenter cluster.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
kubectl apply -f <(helm template --name servicecomb --namespace default service-center/)

Use Helm Install

You can also use the helm commands to deploy the ServiceCenter cluster if you have already deployed helm tiller.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
helm install --name servicecomb --namespace default service-center/

Client Side Registration

In client-side registration mode, the ServiceCenter receives and processes registration requests from micro-service instances and stores instance information in etcd.

Notes: After deployment, it creates the ServiceCenter cluster and an etcd cluster in the default namespace.

Use Kubectl

You can use the command kubectl apply to deploy ServiceCenter cluster.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
# install etcd cluster
kubectl apply -f <(helm template --name coreos --namespace default etcd/)
# install sc cluster
kubectl apply -f <(helm template --name servicecomb --namespace default \
    --set sc.discovery.type="etcd" \
    --set sc.discovery.clusters="http://coreos-etcd-client:2379" \
    --set sc.registry.enabled=true \
    --set sc.registry.type="etcd" \
    service-center/)

Use Helm Install

You can also use the helm commands to deploy the ServiceCenter cluster if you have already deployed helm tiller.

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
# install etcd cluster
helm install --name coreos --namespace default etcd/
# install sc cluster
helm install --name servicecomb --namespace default \
    --set sc.discovery.type="etcd" \
    --set sc.discovery.clusters="http://coreos-etcd-client:2379" \
    --set sc.registry.enabled=true \
    --set sc.registry.type="etcd" \
    service-center/

Confirm the deployment is OK

By default, the ServiceCenter frontend uses the NodePort service type to deploy in Kubernetes.

  1. You can execute the command kubectl get pod to check that all pods are running.
  2. You can also point your browser to http://${NODE}:30103 to view the dashboard of ServiceCenter.
  3. (Recommended) You can use scctl tool to list micro-service information.
# ./scctl get svc --addr http://servicecomb-service-center:30100 -owide
  DOMAIN  |                  NAME               |            APPID        | VERSIONS | ENV | FRAMEWORK  |        ENDPOINTS         | AGE  
+---------+-------------------------------------+-------------------------+----------+-----+------------+--------------------------+-----+
  default | servicecomb-service-center-frontend | service-center-frontend | 0.0.1    |     | Kubernetes | http://172.0.1.101:30103 | 2m   
  default | servicecomb-service-center          | service-center          | 0.0.1    |     | Kubernetes | http://172.0.1.102:30100 | 2m

Clean up

If you used kubectl to deploy, take the platform registration deploy mode as an example:

cd ${PROJECT_ROOT}/examples/infrastructures/k8s
kubectl delete -f <(helm template --name servicecomb --namespace default service-center/)

If you used helm tiller to deploy, take the platform registration deploy mode as an example:

cd ${PROJECT_ROOT}/k8s
helm delete --purge servicecomb

Helm Configuration Values

  • Service Center (sc)
    • deployment (bool: true) Deploy this component or not.
    • service
      • type (string: “ClusterIP”) The kubernetes service type.
      • externalPort (int16: 30100) The external access port. If the type is ClusterIP, it is set to the access port of the kubernetes service, and if the type is NodePort, it is set to the listening port of the node.
    • discovery
      • type (string: “aggregate”) The Service Center discovery type. This can also be set to etcd or servicecenter. aggregate let Service Center merge the discovery sources and applications can discover microservices from these through using Service Center HTTP API. etcd let Service Center start with client registration mode, all the microservices information comes from application self registration. servicecenter let Service Center manage multiple Service Center clusters at the same time. It can be applied to multiple datacenters scenarios.
      • aggregate (string: “k8s,etcd”) The discovery sources of aggregation, only enabled if type is set to aggregate. Different discovery sources are merged together by commas(,), indicating that the Service Center will aggregate service information through these sources. Now support these scenarios: k8s,etcd(for managing services from multiple platforms), k8s,servicecenter(for accessing distinct kubernetes clusters).
      • clusters (string: “sc-0=http://127.0.0.1:2380”) The cluster address managed by Service Center. If type is set to etcd, its format is http(s)://{etcd-1},http(s)://{etcd-2}. If type is set to other value, its format is {cluster name 1}=http(s)://{cluster-1-1},http(s)://{cluster-1-2},{cluster-2}=http(s)://{cluster-2-1}
    • registry
      • enabled (bool: false) Register Service Center itself or not.
      • type (string: “embedded_etcd”) The class of backend storage provider; this decides how Service Center stores the micro-services information. embedded_etcd lets Service Center store data in the local file system, which means a distributed file system is needed if you deploy a high-availability Service Center. etcd lets Service Center store data in an existing etcd cluster, so Service Center can be a stateless service. buildin disables the storage.
      • name (string: “sc-0”) The Service Center cluster name, only enabled if type is set to embedded_etcd or etcd.
      • addr (string: “http://127.0.0.1:2380”) The backend storage provider address. This value should be a part of sc.discovery.clusters value.
  • UI (frontend)
    • deployment (bool: true) Deploy this component or not.
    • service
      • type (string: “NodePort”) The kubernetes service type.
      • externalPort (int16: 30103) The external access port. If the type is ClusterIP, it is set to the access port of the kubernetes service, and if the type is NodePort, it is set to the listening port of the node.

Access Distinct Clusters

ServiceCenter Aggregate Architecture

In the Multiple Datacenters article, we introduced the aggregation architecture of the service center. In fact, this aggregation architecture can be applied not only to deployments across multiple datacenters, but also to aggregating services data across multiple kubernetes clusters.

[Figure: aggregation across kubernetes clusters]

The service centers deployed in distinct kubernetes clusters can communicate with each other and sync the services data from the other kubernetes clusters. Applications can discover services from different kubernetes clusters through the service center HTTP API. This solves the problem of isolation between kubernetes clusters.

Quick Start

Let’s assume you want to install 2 clusters of Service-Center in different Kubernetes clusters with following details.

| Cluster | Kubernetes | Namespace | Node |
|----|----|----|----|
| sc1 | k1 | default | 10.12.0.1 |
| sc2 | k2 | default | 10.12.0.2 |

To facilitate deployment, we will publish the service address of the service center in NodePort mode.

Deploy the Service Center

We use helm to deploy the service center to kubernetes here; the instructions for the specific values can be found here.

Take deployment to kubernetes cluster 1 as an example.

# login the k1 kubernetes master node to deploy sc1
git clone git@github.com:apache/servicecomb-service-center.git
cd examples/infrastructures/k8s
helm install --name k1 \
    --set sc.discovery.clusters="sc2=http://10.12.0.2:30100" \
    --set sc.discovery.aggregate="k8s\,servicecenter" \
    --set sc.registry.type="buildin" \
    --set sc.service.type=NodePort \
    service-center/

Notes: To deploy Service Center in kubernetes cluster 2, you can repeat the above steps and just change the sc.discovery.clusters value to sc1=http://10.12.0.1:30100.

Start Server

Edit the configuration of the ip/port on which Server will register.

vi examples/discovery/server/conf/chassis.yaml

Replace the below values

cse:
  service:
    registry:
      type: servicecenter
      address: http://10.12.0.2:30100 # the address of SC in dc-2

Run the Server

go run examples/discovery/server/main.go

Start Client

Edit the configuration of the ip/port on which Client will register and discover.

vi examples/discovery/client/conf/chassis.yaml

Replace the below values

cse:
  service:
    registry:
      registrator:
        type: servicecenter
        address: http://10.12.0.1:30100 # the address of SC in dc-1
      serviceDiscovery:
        type: servicecenter
        address: http://10.12.0.3:30100 # the address of SC Aggregate

Run the Client

go run examples/discovery/client/main.go

Confirm the multiple datacenters discovery is OK

Since the client is not a service, we check its running log.

2018-09-29 10:30:25.556 +08:00 INFO registry/bootstrap.go:69 Register [Client] success
...
2018-09-29 10:30:25.566 +08:00 WARN servicecenter/servicecenter.go:324 55c783c5c38e11e8951f0a58ac00011d Get instances from remote, key: default Server
2018-09-29 10:30:25.566 +08:00 INFO client/client_manager.go:86 Create client for highway:Server:127.0.0.1:8082
...
2018/09/29 10:30:25 AddEmploy ------------------------------ employList:<name:"One" phone:"15989351111" >

Profiling

Service Center integrates pprof.

Configuration

server:
  pprof:
    mode: 1

Run pprof

go tool pprof http://localhost:30100/debug/pprof/profile?seconds=30
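
Other standard net/http/pprof targets should be reachable in the same way (a sketch, assuming the default listen address and the standard pprof handlers):

go tool pprof http://localhost:30100/debug/pprof/heap
curl "http://localhost:30100/debug/pprof/goroutine?debug=2"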

Design Guides

Design Guides

Service-Center Design

Service-Center (SC) is a service registry that allows services to register their instance information and to discover providers of a given service. Generally, SC uses etcd to store all the information about micro-services and their instances.

[Figure: aggregator design]

  • API Layer: To expose the RESTful and gRPC service.
  • Metadata: The business logic to manage micro-service, instance, schema, tag, dependency and ACL rules.
  • Server Core: Including data model, requests handle chain and so on.
  • Aggregator: It is the bridge between Core and Registry, includes the cache manager and indexer of registry.
  • Registry Adaptor: An abstract layer of registry, exposing a unified interface for upper layer calls.

Below is the diagram stating the working principles and flow of SC.

On StartUp

This section describes a standard client registration process. We assume the micro-services are written using the java-chassis sdk or the go-chassis sdk; when a micro-service boots up, the sdk does the following list of tasks.

  1. On startup the provider registers the micro-service with SC if not registered earlier, and also registers its instance information, like the IP and port on which the instance is running.
  2. SC stores the provider information in etcd.
  3. On startup consumer retrieves the list of all provider instance from SC using the micro-service name of the provider.
  4. Consumer sdk stores all the information of provider instances in its cache.
  5. The consumer sdk creates a web socket connection to SC to watch all the provider instance information; if there is any change in the provider, the sdk updates its cache information.
[Figure: on-startup flow]

Communication between Consumer -> Provider

Once the bootup is successful, the consumer can communicate with providers flawlessly. Below is a diagram illustrating the communication between provider and consumer.

[Figure: communication between consumer and provider]

The provider instance sends a heartbeat signal to SC every 30 seconds; if SC does not receive the heartbeat for a particular instance, the information in etcd expires and the provider instance information is removed.
The consumer watches the information of provider instances from SC, and if there is any change the cache is updated.
When the consumer needs to communicate with the provider, it reads the endpoints of the provider instances from its cache and does load balancing to communicate with the provider.

Note: Feel free to contribute to this document.

Plug-in mechanism

Required

  1. Go version 1.8(+)
  2. Compile service-center with GO_EXTLINK_ENABLED=1 and CGO_ENABLED=1
  3. The plugin file name must have the suffix ‘_plugin.so’
  4. All plugin interface files are in the plugin package

Plug-in names

  1. auth: Customize authentication of service-center.
  2. uuid: Customize micro-service/instance id format.
  3. auditlog: Customize audit log for any change done to the service-center.
  4. cipher: Customize encryption and decryption of TLS certificate private key password.
  5. quota: Customize quota for instance registry.
  6. tracing: Customize tracing data reporter.
  7. tls: Customize loading the tls certificates in server

Example: an authentication plug-in

Step 1: code auth.go

auth.go implements the auth interface.

package main

import (
    "net/http"
)

func Identify(*http.Request) error {
	// do something
	return nil
}
Step 2: compile auth.go

GOPATH=$(pwd) go build -o auth_plugin.so -buildmode=plugin auth.go

Step 3: move the plug-in into the plugins directory

mkdir ${service-center}/plugins
mv auth_plugin.so ${service-center}/plugins

Step 4: run service-center

cd ${service-center}
./servicecenter

Release Notes

Service-Center Release

How to publish release documents

Step 1

Confirm what this version mainly does

https://issues.apache.org/jira/projects/SCB/issues/SCB-2270?filter=allopenissues
Step 2

Collect major issues

Step 3

Write the releaseNotes-xx.xx.xx.md


Running Apache Rat tool

This guide will help you run the Apache Rat tool on the service-center source code. To run the tool, please follow the guidelines below.

Step 1

Clone the Service-Center code and download the Apache Rat tool.

git clone https://github.com/apache/servicecomb-service-center
wget http://mirrors.tuna.tsinghua.edu.cn/apache/creadur/apache-rat-0.13/apache-rat-0.13-bin.tar.gz

# Untar the release
tar -xvf apache-rat-0.13-bin.tar.gz

# Copy the jar in the root directory
cp  apache-rat-0.13/apache-rat-0.13.jar ./

Step 2

Run the Rat tool using the below command

java -jar apache-rat-0.13.jar -a -d servicecomb-service-center/ -e '(.+(\.svg|\.md|\.MD|\.cer|\.tpl|\.json|\.yaml|\.proto|\.pb.go))|(.gitignore|.gitmodules|ux|docs|vendor|licenses|bower.json|cert_pwd|glide.yaml|go.mod|go.sum)'

Below is the list of files which have been excluded from the RAT tool check.

  • *.md *.MD *.html: Skip all the Readme and Documentation file like Api Docs.
  • .gitignore .gitmodules .travis.yml : Skip the git files and travis file.
  • manifest **vendor : Skip manifest and all the files under vendor.
  • bower.json : Skip bower installation file
  • cert_pwd server.cer trust.cer : Skip ssl files
  • *.tpl : Ignore template files
  • glide.yaml go.mod go.sum : Skip dependency config files
  • docs : Skip document files
  • .yaml : Skip configuration files
  • ux : Skip foreground files
  • .proto .pb.go : Skip proto files

You can access the latest RAT report here


Make a release

See here


Archive

Step 1
If you are doing a release for the first time, you can read this document.

Execute the script to archive the source code and generate the summary and signature:

bash scripts/release/archive.sh apache-servicecomb-service-center 2.0.0 littlecui@apache.org

List the current directory:

-rw-rw-r--  1 ubuntu ubuntu 3.1M Jun  8 20:35 apache-servicecomb-service-center-2.0.0-src.tar.gz
-rw-rw-r--  1 ubuntu ubuntu  862 Jun  8 20:35 apache-servicecomb-service-center-2.0.0-src.tar.gz.asc
-rw-rw-r--  1 ubuntu ubuntu  181 Jun  8 20:35 apache-servicecomb-service-center-2.0.0-src.tar.gz.sha512
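
Before pushing, you can verify the artifacts locally (a sketch; gpg --verify assumes the signer's public key has been imported from the project KEYS file):

sha512sum -c apache-servicecomb-service-center-2.0.0-src.tar.gz.sha512
gpg --verify apache-servicecomb-service-center-2.0.0-src.tar.gz.asc \
    apache-servicecomb-service-center-2.0.0-src.tar.gz
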
Step 2

Push to the apache dev repo:

svn co https://dist.apache.org/repos/dist/dev/servicecomb/
cd servicecomb/
mkdir -p 2.0.0
cp apache-servicecomb-service-center-* 2.0.0/
svn add .
svn ci --username xxx --password xxx -m "Add the Service-Center 2.0.0 version"

Add tag

Step 1

Push new tag to repo

git clone https://github.com/apache/servicecomb-service-center.git

git tag vx.x.x

git push origin vx.x.x
Step 2

Edit the tag to make the x.x.x version release

The published content should use releaseNotes-vx.x.x.md
Step 3

Initiate the version voting: send an email to dev@servicecomb.apache.org

mail format : use plain text

mail subject : [VOTE] Release Apache ServiceComb Service-Center version 2.1.0

mail content :

Hi all,

Please review and vote on Apache ServiceCenter 2.1.0 release.

The release candidate has been tagged in GitHub as 2.1.0, available
here:
https://github.com/apache/servicecomb-service-center/releases/tag/v2.1.0

Release Notes are here:
https://github.com/apache/servicecomb-service-center/blob/v2.1.0/docs/release/releaseNotes-2.1.0.md

Thanks to everyone who has contributed to this release.

The artifacts (source, signature and checksum) corresponding to this release
candidate can be found here:
https://dist.apache.org/repos/dist/dev/servicecomb/servicecomb-service-center/2.1.0/

This has been signed with PGP key, public KEYS file is available here:
https://dist.apache.org/repos/dist/dev/servicecomb/KEYS

To verify and build, you can refer to following wiki:
https://github.com/apache/servicecomb-service-center#building--running-service-center-from-source

The vote will be open for at least 72 hours.
[ ] +1 Approve the release
[ ] +0 No opinion
[ ] -1 Do not release this package because ...

Best Regards,
robotljw
Step 4

After the vote passes, upload the release packages of the relevant version.

1. Edit the vx.x.x release

2. Attach the binaries by dropping them onto the release page or selecting them:

apache-servicecomb-service-center-x.x.x-darwin-amd64.tar.gz

apache-servicecomb-service-center-x.x.x-linux-amd64.tar.gz

apache-servicecomb-service-center-x.x.x-windows-amd64.tar.gz

Release Notes

Apache ServiceComb Service-Center (incubating) version 1.0.0

New Features/Improvements:
  • Make ETCD connection more Resilient
  • Make ETCD request timeout configurable
  • Support TLS Plugin
  • Optimize Governance API for Searching Schema
  • Optimize Find Instance API
  • Use glide for dependency management
  • Add release binaries for MacOS
  • Add Topology View and Instance View in UI
Bug-Fix:
  • Fix connection leak in etcd
  • Fix Lose of events in some scenarios
  • Fix Cache mismatch.
For more details please click here

Release Notes

Apache ServiceComb Service-Center (incubating) version 1.0.0-m1

API Changes :
  • Added new API to get All Schema List.
  • Add Service statistics in the Governance API.
  • Add Self-microservice information in the Governance API.
New Features/Improvements:
  • Support discovery of SC instances by Consumer micro-service.
  • Event driven implementation for dependency rules.
  • Make compact interval configurable and avoid defragmentation of the database when compacted.
  • Update the default quota’s limit of service/instance count.
  • Update black-list rule controls in discovery.
Metrics :
  • Added support for Prometheus metrics exposure.
  • Added templates for Grafana Dashboard.
Optimization:
  • Optimized Restful clients and plugins loader.
  • Optimized Service-Count calculation rule.
  • Use CDN for resolving all the dependencies of frontend.
Bug-Fix:
  • Fix panic issue while deleting instance and invalid metrics request.
  • Fix modify schema response issue and heart-beat failure when etcd has no leader.
  • Fix batch delete api to exempt from unregistering service-center microservice.
  • Fix watcher wrong event sequence when SC modify resource concurrently
  • Fix discovery of default APP services in Shared service mode

Release Notes

Apache ServiceComb Service-Center (incubating) version 1.0.0-m2

API Changes :
  • Governance API also returns self microservice information.
  • Governance API should not show the shared microservices information.
  • Support batch delete in registry.
  • Change the type of force query parameter to bool in delete api.
New Features/Improvements:
  • Support Async Rest Template.
  • Support of Testing Schema from frontend.
  • Support log rotation.
  • Support ipv6.
  • Static data return instanceCount by domain.
  • Convenient store extension.
  • Retry the connection to etcd in-case of failure.
  • Show proper error details in frontend.
  • Support Default TLS Cipher Suites.
  • Proxy Frontend request to Service-Center.
  • Use bower to resolve the dependency of frontend.
  • Add registry server HC mechanism.
Bug-Fix:
  • Fix issue of filter instance using service-tags.
  • Fix re-creation of tracing file.
  • Fix SC cannot check duplicate endpoints when registered with etcd.
  • Fix wrong parentId in tracing data.
  • Fix wrong log print in update Instance.
  • Fix null pointer reference in zipkin plugin.
  • Fix delete service should delete dependency key.
  • Fix cache does not match with etcd store.
  • Fix remove the backup log files which are expired.
  • Fix typos in response of schema api’s.
  • Fix incorrect metric label value.
  • Fix register instance with the same id will create redundant endpoints.
For more details please click here

Release Notes

    Release Notes - Apache ServiceComb - Version service-center-1.1.0

Bug

  • [SCB-744] - Wrong error code returned in Find API
  • [SCB-851] - Can not get providers if consumer have * dependency rule
  • [SCB-857] - Provider rule of consumer can not be removed
  • [SCB-863] - build script for docker image gives an error
  • [SCB-890] - Lost changed event when bootstrap with embedded etcd
  • [SCB-912] - rest client still verify peer host when verifyPeer flag set false
  • [SCB-924] - Etcd cacher should re-list etcd in fixed time interval
  • [SCB-927] - The latest Lager is not compatible
  • [SCB-929] - Concurrent error in update resource APIs
  • [SCB-930] - Service Center Frontend stops responding in Schema test if Schema has '\"' character in the description
  • [SCB-934] - Get all dependency rules will panic
  • [SCB-938] - Should check self presevation max ttl
  • [SCB-951] - Wrong help information in scctl
  • [SCB-958] - The instance delete event delay more than 2s
  • [SCB-977] - Dependencies will not be updated in 5min when micro service is changed
  • [SCB-980] - The dependency will be broken when commit etcd failed
  • [SCB-981] - Can not remove the microservice and instance properties
  • [SCB-991] - Optimize args parsing
  • [SCB-993] - Bug fixes
  • [SCB-994] - SC can not read the context when client using grpc api
  • [SCB-1027] - Fix the core dump in SC which compiled with go1.10+

New Feature

  • [SCB-815] - Support deploy in Kubernetes
  • [SCB-850] - Support discover instances from kubernetes cluster
  • [SCB-869] - SC cli tool
  • [SCB-902] - Support service discovery by Service Mesh
  • [SCB-914] - Support using scctl to download schemas
  • [SCB-941] - Support multiple datacenter deployment
  • [SCB-949] - Support access distinct kubernetes clusters

Improvement

  • [SCB-418] - How to deploy a SC cluster in container environment
  • [SCB-435] - Add plugin document in ServiceCenter
  • [SCB-792] - More abundant metrics information
  • [SCB-796] - Update the paas-lager package
  • [SCB-797] - More information in dump API
  • [SCB-807] - Limit the topology view to only 100 microservices.
  • [SCB-808] - Aut-refresh the dashboard and service-list page every 10sec
  • [SCB-809] - Verify the chinese version of the UI as all chinese text was translated using Google Translate
  • [SCB-816] - Update the protobuf version to 1.0.0
  • [SCB-840] - Support configable limit in buildin quota plugin
  • [SCB-844] - Update golang version to 1.9.2
  • [SCB-848] - Uses zap to replace the paas-lager
  • [SCB-862] - Using different environment variables in image
  • [SCB-892] - output plugins configs in version api
  • [SCB-899] - Support go1.11 module maintaining
  • [SCB-901] - Making service registration api idempotent
  • [SCB-937] - Customizable tracing sample rate
  • [SCB-953] - Support sync distinct Kubernetes service types to service-center
  • [SCB-978] - Fix translation issues for Chinese Locale on First Load
  • [SCB-983] - Output the QPS per domain
  • [SCB-984] - Add Health Check command
  • [SCB-1015] - Support the forth microservice version number registration

Task

  • [SCB-720] - Show the instance statistics in Dashboard and Instance List in Side Menu
  • [SCB-1016] - Change git repo name
  • [SCB-1028] - Prepare 1.1.0 Service-Center Release