Running Ceptor in Docker
It is quite easy to run Ceptor inside a Docker container, but there are some caveats you need to be aware of.
Caveats You Need to Know About
First, Docker behaves differently regarding memory: by default, the JVM reads the memory size of the host machine that Docker runs on, not the amount of memory reserved for the individual container.
If a Docker container's memory limit is hit, the Linux kernel simply kills the process without any easily identifiable message, so troubleshooting can be difficult.
Second, many container engines (e.g. OpenShift) by default reserve a very low amount of CPU for a container, causing starvation where processes are deprived of CPU for dozens of seconds, or even minutes. If this occurs, internal diagnostics within Ceptor will start complaining that something is wrong, connections between components might be considered timed out and closed, and for extreme delays, components might be killed and restarted.
Memory-related command-line arguments in Java 8
To work around the memory issues, the JVM you start must have the options -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap defined on the java command line - this enables the JVM to recognize the limits imposed by the Docker container and adjust accordingly.
When using Java 11, no such options are needed - it detects the container limits by default.
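As an illustration, a Java 8 based image could include these flags in its startup command - a minimal sketch, reusing the boot.jar invocation shown later in this document:

CMD ["java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-jar", "/ceptor/bin/boot.jar", "/ceptor/config/ceptor_launch.xml"]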
CPU Settings
You should in general reserve at least 2 cores for Ceptor - the amount needed depends heavily upon load and the number of services started in the same container. If you start seeing errors in the logs such as "It took more than 5 seconds (38256 msecs) to...." then it is a sign that the processes are starved of CPU.
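If you run the image with plain Docker, CPU (and memory) can be reserved when starting the container - a sketch; the image name and values are illustrative:

docker run -d --cpus=2 --memory=3g ceptor/ceptor-demo:latest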
Ceptor Docker Images
Ceptor provides 5 Docker images:
- ceptor-base
- ceptor-base-gateway
- ceptor-base-microgateway
- ceptor-demo
- ceptor-demo-gateway
These images reside on Docker Hub - to get read access to the repositories, please register your Docker Hub account with sales@ceptor.io and we will ensure that your account is allowed to access our repositories.
Once you have access, you can go to https://hub.docker.com/u/ceptor to view the list of available repositories and images.
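Once access has been granted, the images are pulled as usual - for example (the tag shown is illustrative):

docker login
docker pull ceptor/ceptor-demo:latest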
ceptor-base
This is the base/core image - it contains all the code needed, but little or no configuration - all the remaining ceptor images build upon this one.
ceptor-base-gateway
This image is based upon ceptor-base, but removes everything not needed by the gateway itself. It is a useful image to base a single Gateway (or API Gateway) upon, since all non-essential files have been removed, making it suitable for use in e.g. DMZ zones.
ceptor-base-microgateway
This image is, like ceptor-base-gateway, based upon ceptor-base, but it keeps full Ceptor functionality while providing a scaled-down configuration with very low resource requirements - this makes it an excellent choice to base a microgateway upon.
ceptor-demo
This image can be used as a demonstration of Ceptor - it contains a single instance of both Ceptor Server and Gateway, along with example configuration - starting everything up in the same, single instance.
ceptor-demo-gateway
The ceptor-demo-gateway image contains a demonstration version of a standalone gateway, based upon the ceptor-base-gateway image. It starts only a single instance of the gateway.
Dockerfile
This is an example of a Dockerfile for the Ceptor Demonstration image, which runs all components in a single Docker container and only exposes port 4243 for the console, port 8443 for the gateway, and port 9443 for the internal gateway (used for admin REST APIs).
FROM ceptor-base
MAINTAINER Ceptor ApS <support@ceptor.io>
LABEL description="Ceptor Demonstration"

EXPOSE 4243 8443 9443

COPY pp/testdatabase /ceptor/template/testdatabase
COPY pp/config /ceptor/template/config
COPY docker/cfg /ceptor/template/config
COPY docker/ceptor_demo.sh /ceptor/ceptor.sh

RUN tr -d '\r' < /ceptor/ceptor.sh > /usr/local/bin/ceptor-entrypoint.sh \
    && chmod +x /usr/local/bin/ceptor-entrypoint.sh \
    && rm /ceptor/ceptor.sh \
    && mkdir /ceptor/testdatabase \
# Allow it to be run by non-root users
    && chgrp -R 0 /ceptor/template/testdatabase \
    && chmod -R g=u /ceptor/template/testdatabase \
    && chgrp -R 0 /ceptor/template/config \
    && chmod -R g=u /ceptor/template/config

ENV elasticsearch_server=elasticsearch
ENV elasticsearch_enabled=false

VOLUME ["/ceptor/logs", "/ceptor/statistics", "/ceptor/config", "/ceptor/testdatabase"]
Note the elasticsearch_server and elasticsearch_enabled environment variables - to use Elasticsearch for API Usage information, simply set elasticsearch_enabled to true and elasticsearch_server to the Elasticsearch hostname.
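For example, when running the demo image with plain Docker, the variables could be passed like this (a sketch - the hostname is illustrative):

docker run -d -e elasticsearch_enabled=true -e elasticsearch_server=elasticsearch.example.com ceptor/ceptor-demo:latest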
Docker ENTRYPOINT
As you can see from the Dockerfile, it sets an entrypoint copied from ceptor.sh - that file is shown below. Its purpose is to check whether the config and testdatabase directories are empty on startup, and if they are, populate them with default values - this makes it easier to use persistent volumes on e.g. OpenShift without needing to copy files around manually before starting.
#!/bin/bash
#
# Check if config and database directories are empty.
# They might be if volume is mounted to an empty dir - if so,
# copy from template so we can startup - then run the program
# supplied in the parameters
#

# if nothing in config, copy from template
if [ "$(ls -A /ceptor/config)" ]; then
    echo "Files are already in /ceptor/config - reusing them"
else
    cp -R /ceptor/template/config/* /ceptor/config
fi

# if nothing in testdatabase, copy from template
if [ "$(ls -A /ceptor/testdatabase)" ]; then
    echo "Files are already in /ceptor/testdatabase - reusing them"
else
    cp -R /ceptor/template/testdatabase/* /ceptor/testdatabase
fi

exec "$@"
Which Ports to Expose
See Deployment Firewall Settings, which contains a list of the ports different components use by default.
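As an illustration, the ports exposed by the demo image above can be published with plain Docker like this (a sketch using the default ports from the Dockerfile):

docker run -d -p 4243:4243 -p 8443:8443 -p 9443:9443 ceptor/ceptor-demo:latest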
Persistent volumes
By default, in the Demo image, Ceptor needs 5 persistent volumes or directories (an example mount command follows the list):
- /ceptor/config
  The configuration is persisted here - any changes made will be saved to the config folder. There should be a separate persistent volume for each config server.
- /ceptor/logs
  Logfiles are by default written to this directory by all Ceptor components.
- /ceptor/statistics
  The statistics server stores its files in this directory.
- /ceptor/work
  If using Apache Ignite, it stores its files in the work directory, so each Ignite instance should have its own.
- /ceptor/testdatabase
  When using the default test Derby database for useradmin, or as datastore for APIs, it persists its files in this directory.
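For example, with plain Docker the directories could be mapped to named volumes like this - a sketch; the volume names are illustrative and mostly mirror the Docker Compose example later in this document:

docker run -d \
  -v ceptor1_config:/ceptor/config \
  -v ceptor1_log:/ceptor/logs \
  -v ceptor1_stat:/ceptor/statistics \
  -v ceptor1_work:/ceptor/work \
  -v ceptor1_db:/ceptor/testdatabase \
  ceptor/ceptor-demo:latest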
Different Approaches
Running Using the Service Launcher
Normally, outside Docker, Ceptor is started using the service launcher / bootstrapper, which is a process that starts up and reads ceptor_launch.xml. From the contents of this file, it then starts several sub-processes, and monitors and restarts them if any failures occur.
This approach can be used in Docker, but beware that the total memory requirement is the sum of the maximum heap, non-heap memory, and container overhead for all processes.
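As a purely illustrative example: four JVMs, each configured with a 512 MB maximum heap and roughly 256 MB of non-heap memory, would need on the order of 3 GB reserved for the container - the actual figures depend entirely on your configuration and load.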
Telling Service Launcher to Run in Standalone Mode
The Service Launcher is capable of running in standalone mode - this means it loads the JVM definitions within ceptor_launch.xml but does not spawn separate child JVM processes - instead it loads everything in the current process. This is often easier if running just a single JVM.
Below is an example of changed startup options which runs in standalone mode.
CMD ["java", "-XX:+HeapDumpOnOutOfMemoryError", "-Dorg.jboss.logging.provider=slf4j", "--add-opens", "jdk.management/com.sun.management.internal=ALL-UNNAMED", "-jar", "/ceptor/bin/boot.jar", "-standalone", "/ceptor/config/ceptor_launch.xml"]
Splitting into Multiple Containers
It is also possible to split into multiple containers and start each component in its own - this is easily accomplished using parameters for the launcher, which can tell it to start just a single JVM, and even to not start any client process but instead load the services in the parent process.
This approach is preferred by many, since it allows scaling each individual component up and down depending on load.
It has a significant downside though: increased complexity in the setup - it complicates configuration, load balancing etc., and since there are significant differences between container engines and their configuration / capabilities, there is no single guide that covers them all.
A tradeoff can be a good idea, where e.g. the gateway and session controller are kept as separate containers that can scale up and down depending on load, while the configuration servers, console, logserver etc. are kept together.
JVMs and Services
The default docker command to start looks like this:
CMD ["java", "-jar", "/ceptor/bin/boot.jar", "/ceptor/config/ceptor_launch.xml"]
You can add more arguments to control which modules/jvms to startup - see more information here: Ceptor Getting Started
The default ceptor_launch.xml file in the Docker image contains these JVMs (note that this differs slightly from the default distribution, to save memory within the Docker container).
Here is an abbreviated list with most of the settings removed for clarity.
<jvm name="cfg" ...>
    <service name="configserver1" launcherclass="dk.itp.managed.service.ConfigServerLauncher">
    <service name="logserver1" serviceclass="dk.itp.peer2peer.log.server.LogServer" />
    <service name="statisticsserver1" serviceclass="dk.itp.managed.service.StatisticsService" />
<jvm name="ignite" ...>
    <service name="ignite1" launcherclass="io.ceptor.ignite.IgniteLauncher">
<jvm name="console" ...>
    <service name="ceptorconsole" launcherclass="io.ceptor.console.ConsoleLauncher">
<jvm name="session_user" ...>
    <service name="derby" launcherclass="io.ceptor.datastore.db.DerbyLauncher">
    <service name="sessionctrl1" serviceclass="dk.itp.security.passticket.server.PTSServer" />
    <service name="useradmin1" launcherclass="dk.itp.pp.useradmin.server.UserAdminServerLauncher" />
    <service name="useradminapp" launcherclass="dk.asseco.pp.ua.UserAdminAppLauncher">
<jvm name="gateway" ...>
    <service name="gateway1" launcherclass="io.ceptor.gateway.GatewayLauncher">
<jvm name="apideveloperportal" ...>
    <service name="apideveloperportal" launcherclass="io.ceptor.apimanager.developerportal.DeveloperPortalLauncher">
<jvm name="demoapp" ...>
    <service name="webserver1" launcherclass="dk.itp.managed.service.GenericWarLauncher">
If you e.g. only want to start a gateway within a particular Docker container (or pod, if using Kubernetes terminology), add a few extra command-line arguments like this:
CMD ["java", "-jar", "/ceptor/bin/boot.jar", "/ceptor/config/ceptor_launch.xml", "-jvms", "gateway"]
or if you want to start only the sessionctrl1 and useradmin1 within the session_user jvm, add these arguments instead:
CMD ["java", "-jar", "/ceptor/bin/boot.jar", "/ceptor/config/ceptor_launch.xml", "-jvms", "session_user", "-services", "sessionctrl1;useradmin1"]
Note that when starting the configuration server, you should only have one instance of the statisticsserver and one instance of the master config server - you can have as many slave configserver instances as you want, but there can only be a single master running at a time and only a single statisticsserver.
When you only start a single JVM, there is no reason to use the launcher and let it start a client process - in this case, add -standalone to the command-line arguments, and also add any JVM options, e.g.
java -Djava.awt.headless=true -Xnoclassgc -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar \
  /ceptor/bin/boot.jar \
  -standalone \
  /ceptor/config/ceptor_launch.xml \
  -jvms session_user \
  -services "sessionctrl1;useradmin1"
When you run it this way, only a single process will run within the docker container, using the least amount of resources.
Autoscaling
Autoscaling does not make sense for all components - there is no reason for instance to autoscale the configuration server, logserver or the console - but autoscaling the gateway or even the session controller might be a good idea.
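As an illustration, if the gateway runs in its own deployment on Kubernetes or OpenShift, CPU-based autoscaling could be configured like this - a sketch; the deployment name and thresholds are illustrative:

kubectl autoscale deployment ceptor-gateway --cpu-percent=70 --min=2 --max=5
# or, on OpenShift, against a DeploymentConfig:
oc autoscale dc/ceptor-gateway --cpu-percent=70 --min=2 --max=5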
OpenShift Notes
OpenShift has a known bug regarding pulling from private repositories, which we have run into - information and a workaround are available here: https://access.redhat.com/solutions/3568231
Note that the workaround is NOT (at the time of writing) implemented in the Online version of OpenShift.
Using Docker Compose
The following file is an example that configures Docker Compose to start Elasticsearch, as well as a single demonstration Ceptor container that uses Elasticsearch for API Usage information.
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - net1
  ceptor1:
    image: ceptor/ceptor-demo:latest
    container_name: ceptor1
    environment:
      - cluster.name=docker-cluster
      - elasticsearch_server=elasticsearch
      - elasticsearch_enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ceptor1_log:/ceptor/logs
      - ceptor1_stat:/ceptor/statistics
      - ceptor1_config:/ceptor/config
      - ceptor1_db:/ceptor/testdatabase
    ports:
      - 4243:4243
      - 8443:8443
      - 9443:9443
    depends_on:
      - elasticsearch
    networks:
      - net1

volumes:
  esdata1:
    driver: local
  ceptor1_log:
    driver: local
  ceptor1_stat:
    driver: local
  ceptor1_config:
    driver: local
  ceptor1_db:
    driver: local

networks:
  net1:
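Assuming the file is saved as docker-compose.yml, the stack can be started and followed like this:

docker-compose up -d
docker-compose logs -f ceptor1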
Sample OpenShift template
Below is an example template for deployment to OpenShift - it sets up a demonstration project with one PostgreSQL database and one Ceptor instance.
To use it, you should modify the default datastore within Ceptor to:
{"datastore": { "name": "API Postgres datastore", "description": "Datastore implementation using a postgres database", "factoryclass": "io.ceptor.datastore.db.JDBCDataStoreFactory", "configuration": { "databasetype": "postgres", "pool": { "db.checkexecuteupdate": true, "db.connectionurl": "jdbc:postgresql://${environment:POSTGRESQL_SERVICE_HOST}:${environment:POSTGRESQL_SERVICE_PORT}/ceptordb", "db.credentials": "${environment:POSTGRESQL_PASSWORD}", "db.debug": false, "db.drivername": "org.postgresql.Driver", "db.delaygetconnection": "0", "db.getconnectiontimeout": 60000, "db.initialpoolsize": 5, "db.maxconnectionlife": 600, "db.maxconnectionusage": 5000, "db.searchstartwithwildcard": false, "db.testonreserve": false, "db.username": "${environment:POSTGRESQL_USER}" } }, "id": "datastore-postgres" }}
and use this YAML file with OpenShift:
base_env: &base_env
  - name: TENANT_NAME
    value: "${TENANT_NAME}"
apiVersion: v1
kind: Template
metadata:
  name: Ceptor
  description: Ceptor - more info at https://ceptor.io
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: Ceptor ApS
    labels:
      app: ceptor-demo
    name: ceptor-demo
  spec:
    lookupPolicy:
      local: false
    tags:
    - annotations:
        openshift.io/imported-from: ceptor/ceptor_docker
      from:
        kind: DockerImage
        name: ${CEPTOR_IMAGE}
      generation: 2
      importPolicy: {}
      name: latest
      referencePolicy:
        type: Source
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      openshift.io/generated-by: Ceptor ApS
    labels:
      app: ceptor-demo
    name: ceptor-demo
  spec:
    ports:
    - name: 4243-tcp
      port: 4243
      protocol: TCP
      targetPort: 4243
    - name: 8443-tcp
      port: 8443
      protocol: TCP
      targetPort: 8443
    selector:
      app: ceptor-demo
      deploymentconfig: ceptor-demo
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: "v1"
  kind: "PersistentVolumeClaim"
  metadata:
    name: "ceptor-persistent"
  spec:
    accessModes:
    - "ReadWriteOnce"
    resources:
      requests:
        storage: "1Gi"
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generated-by: Ceptor ApS
    labels:
      app: ceptor-demo
    name: ceptor-demo
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      app: ceptor-demo
      deploymentconfig: ceptor-demo
    strategy:
      activeDeadlineSeconds: 21600
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        annotations:
          openshift.io/generated-by: OpenShiftNewApp
        creationTimestamp: null
        labels:
          app: ceptor-demo
          deploymentconfig: ceptor-demo
      spec:
        containers:
        - image: ${CEPTOR_IMAGE}
          imagePullPolicy: IfNotPresent
          name: ceptor-demo
          ports:
          - containerPort: 4243
            protocol: TCP
          - containerPort: 8443
            protocol: TCP
          resources:
            requests:
              cpu: 2000m
              memory: 2000Mi
            limits:
              cpu: 4000m
              memory: 3000Mi
          # livenessProbe:
          #   initialDelaySeconds: 30
          #   periodSeconds: 10
          #   tcpSocket:
          #     port: 8443
          # readinessProbe:
          #   tcpSocket:
          #     port: 8443
          #   initialDelaySeconds: 30
          #   timeoutSeconds: 5
          volumeMounts:
          - mountPath: /ceptor/config
            name: ceptor-volume
            subPath: config
          - mountPath: /ceptor/logs
            name: ceptor-volume
            subPath: logs
          - mountPath: /ceptor/statistics
            name: ceptor-volume
            subPath: statistics
          - mountPath: /ceptor/testdatabase
            name: ceptor-volume
            subPath: testdatabase
          - mountPath: /ceptor/work
            name: ceptor-volume
            subPath: work
          env:
          - name: POSTGRESQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${DATABASE_SERVICE_NAME}"
          - name: POSTGRESQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${DATABASE_SERVICE_NAME}"
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        volumes:
        - name: ceptor-volume
          persistentVolumeClaim:
            claimName: ceptor-persistent
    test: false
    triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
        - ceptor-demo
        from:
          kind: ImageStreamTag
          name: ceptor-demo:latest
      type: ImageChange
- apiVersion: v1
  kind: Route
  metadata:
    labels:
      app: ceptor-demo
    name: console
  spec:
    host: console-${TENANT_NAME}.${WILDCARD_DOMAIN}
    port:
      targetPort: 4243-tcp
    tls:
      termination: passthrough
    to:
      kind: Service
      name: ceptor-demo
      weight: 100
    wildcardPolicy: None
- apiVersion: v1
  kind: Route
  metadata:
    labels:
      app: ceptor-demo
    name: gateway
  spec:
    host: gateway-${TENANT_NAME}.${WILDCARD_DOMAIN}
    port:
      targetPort: 8443-tcp
    tls:
      termination: passthrough
    to:
      kind: Service
      name: ceptor-demo
      weight: 100
    wildcardPolicy: None
- apiVersion: v1
  kind: Secret
  metadata:
    annotations:
      template.openshift.io/expose-database_name: "{.data['database-name']}"
      template.openshift.io/expose-password: "{.data['database-password']}"
      template.openshift.io/expose-username: "{.data['database-user']}"
    name: "${DATABASE_SERVICE_NAME}"
  stringData:
    database-name: "${POSTGRESQL_DATABASE}"
    database-password: "${POSTGRESQL_PASSWORD}"
    database-user: "${POSTGRESQL_USER}"
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      template.openshift.io/expose-uri: postgres://{.spec.clusterIP}:{.spec.ports[?(.name=="postgresql")].port}
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    ports:
    - name: postgresql
      nodePort: 0
      port: 5432
      protocol: TCP
      targetPort: 5432
    selector:
      name: "${DATABASE_SERVICE_NAME}"
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "${VOLUME_CAPACITY}"
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    replicas: 1
    selector:
      name: "${DATABASE_SERVICE_NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${DATABASE_SERVICE_NAME}"
      spec:
        containers:
        - capabilities: {}
          env:
          - name: POSTGRESQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${DATABASE_SERVICE_NAME}"
          - name: POSTGRESQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${DATABASE_SERVICE_NAME}"
          - name: POSTGRESQL_DATABASE
            valueFrom:
              secretKeyRef:
                key: database-name
                name: "${DATABASE_SERVICE_NAME}"
          image: " "
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
              - "/usr/libexec/check-container"
              - "--live"
            initialDelaySeconds: 120
            timeoutSeconds: 10
          name: postgresql
          ports:
          - containerPort: 5432
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - "/usr/libexec/check-container"
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            limits:
              memory: "${MEMORY_LIMIT}"
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: "/dev/termination-log"
          volumeMounts:
          - mountPath: "/var/lib/pgsql/data"
            name: "${DATABASE_SERVICE_NAME}-data"
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: "${DATABASE_SERVICE_NAME}-data"
          persistentVolumeClaim:
            claimName: "${DATABASE_SERVICE_NAME}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - postgresql
        from:
          kind: ImageStreamTag
          name: postgresql:${POSTGRESQL_VERSION}
          namespace: "${NAMESPACE}"
        lastTriggeredImage: ''
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: postgresql
- description: Username for PostgreSQL user that will be used for accessing the database.
  displayName: PostgreSQL Connection Username
  from: user[A-Z0-9]{3}
  generate: expression
  name: POSTGRESQL_USER
  required: true
- description: Password for the PostgreSQL connection user.
  displayName: PostgreSQL Connection Password
  from: "[a-zA-Z0-9]{16}"
  generate: expression
  name: POSTGRESQL_PASSWORD
  required: true
- description: Name of the PostgreSQL database accessed.
  displayName: PostgreSQL Database Name
  name: POSTGRESQL_DATABASE
  required: true
  value: ceptordb
- description: Volume space available for data, e.g. 512Mi, 2Gi.
  displayName: Volume Capacity
  name: VOLUME_CAPACITY
  required: true
  value: 1Gi
- description: Version of PostgreSQL image to be used (9.4, 9.5, 9.6 or latest).
  displayName: Version of PostgreSQL Image
  name: POSTGRESQL_VERSION
  required: true
  value: '9.6'
- name: WILDCARD_DOMAIN
  description: Root domain for the wildcard routes. Eg. example.com will generate xxxxx.example.com.
  displayName: Domain name
  required: true
- name: TENANT_NAME
  description: "Tenant name under the root"
  displayName: Hostname
  required: true
  value: "ceptor"
- name: CEPTOR_IMAGE
  displayName: Ceptor docker image name
  description: Ceptor image to use
  required: true
  value: "ceptor/ceptor-demo:latest"
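Assuming the template is saved as e.g. ceptor-template.yaml and the datastore change above has been made, it could be instantiated with the oc client like this (the parameter values are illustrative):

oc process -f ceptor-template.yaml \
  -p WILDCARD_DOMAIN=apps.example.com \
  -p TENANT_NAME=ceptor \
  | oc apply -f -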
To build your own custom configuration using a Ceptor image as a base, you can use a Dockerfile similar to this:
FROM ceptor/ceptor-base:latest
EXPOSE 4243 8443 9443

# Place any configuration files you want to override the default settings in the cfg directory
# - e.g. your own ceptor_launch.xml and ceptor-configuration.xml
# Copy from cfg, overwriting the default configuration.
ADD cfg /ceptor/template/config
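Assuming this Dockerfile and your cfg directory are in the current directory, the image can be built and started like this (the image name is illustrative):

docker build -t my-ceptor:latest .
docker run -d -p 4243:4243 -p 8443:8443 -p 9443:9443 my-ceptor:latest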
© Ceptor ApS. All Rights Reserved.