Wednesday, December 5, 2018
Golang, resolver and segmentation violation
A colleague of mine ran into weird connection issues when trying to install our BaaS solution on Alicloud for a client. Here is a very thorough explanation of the problem we encountered - https://yq.aliyun.com/articles/238940.
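For context, Go's net package can resolve DNS either with its pure-Go resolver or by calling into the system's C library, and resolver-related crashes are often worked around by forcing one or the other (process-wide via GODEBUG=netdns=go or =cgo). A minimal sketch of pinning the pure-Go resolver in code:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupWithGoResolver resolves a host using the pure-Go resolver,
// bypassing cgo/glibc entirely. The same choice can be made for the
// whole process with GODEBUG=netdns=go (or =cgo for the opposite).
func lookupWithGoResolver(host string) ([]string, error) {
	r := &net.Resolver{PreferGo: true}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	return r.LookupHost(ctx, host)
}

func main() {
	addrs, err := lookupWithGoResolver("localhost")
	fmt.Println(addrs, err)
}
```

Whether this applies to the Alicloud issue above depends on the root cause described in the linked article; treat it as a general diagnostic knob, not the specific fix.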
Wednesday, October 10, 2018
Ansible, Jinja2 and Quoting
TL;DR
Ansible allows variable substitution in plays through Jinja2 templating. If a variable starts a string, the string needs to be quoted, either by single quotes or double quotes, so as not to be treated as a mapping by YAML, e.g.
- hosts: app_servers
vars:
app_path: "{{ base_path }}/22"
or
- hosts: app_servers
vars:
app_path: '{{ base_path }}/22'
The Story
I had been aware of the whole variable substitution thing, but was under the impression that quoting could only be done with double quotes, probably because of the official example. I was reading some code by a colleague of mine and realized single quotes also work under such circumstances. I then dug a little further and here is what I found.
The Rationale
- Single quotes and double quotes are both for the YAML interpreter. The difference is that certain escape sequences carry special meanings inside double quotes, like "\n" for a newline, whereas in single quotes they are kept as-is.
- Jinja2 variable substitution, denoted by {{ }}, is not escaping. It is for the Jinja2 interpreter, one level higher in Ansible's processing stack.
- Quoting a string that starts with a Jinja2 variable is merely so that it is not treated as a mapping by the YAML interpreter. So if a string contains variables but doesn't start with one, it doesn't even need quoting.
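To illustrate the last point (play and variable names here are hypothetical), a string that merely contains a variable, without starting with one, needs no quoting at all:

```yaml
- hosts: app_servers
  vars:
    # Starts with a plain character, so YAML cannot mistake it for a mapping:
    app_path: /opt/{{ app_name }}/22
```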
Friday, August 31, 2018
Ginkgo and goroutine
Ginkgo is a fantastic Behavior Driven Development (BDD) framework for golang and I've been using it for a while. The other day I hit an unusual error:
panic:
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call
defer GinkgoRecover()
at the top of the goroutine that caused this panic.
followed by a bunch of stack traces.
This is not good, obviously. Checking the documentation gives you an example that does exactly what the error message above suggests - putting defer GinkgoRecover() at the beginning of the goroutine.
This was important, not because it eliminated the panic - it didn't - but because it let Ginkgo rescue the panic and report where things actually went wrong - a missing mock function in my case.
Unexpected call to *mock_storage.MockBackendNew.ChaincodeDeploymentUpdate([mychannel unique_mycc_id mycc { done 0001-01-01 00:00:00 +0000 UTC}]) at /Users/tony/workspace/go/src/github.com/arxanchain/baymax/vendor/github.com/arxanchain/chain-mgmt/core/storage/mock/mock_storage/mock_storage.go:336 because: there are no expected calls of the method "ChaincodeDeploymentUpdate" for that receiver
After fixing whatever the root cause is, you could of course remove the GinkgoRecover function call from your goroutine, because you've already got a successful test suite and don't need the diagnosis.
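The underlying reason is a plain Go rule: recover only works inside the goroutine that panicked, so the test framework's top-level recovery can't see a panic raised elsewhere. A minimal stdlib sketch (not Ginkgo code) of what defer GinkgoRecover() achieves:

```go
package main

import "fmt"

// runInGoroutine runs f in its own goroutine and converts any panic
// into an error. The recover MUST live inside that goroutine -- which
// is exactly why Ginkgo asks for "defer GinkgoRecover()" at the top of
// every goroutine that makes assertions.
func runInGoroutine(f func()) error {
	done := make(chan error, 1)
	go func() {
		defer func() {
			if r := recover(); r != nil {
				done <- fmt.Errorf("recovered: %v", r)
				return
			}
			done <- nil
		}()
		f()
	}()
	return <-done
}

func main() {
	err := runInGoroutine(func() { panic("assertion failed") })
	fmt.Println(err)
}
```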
Monday, June 18, 2018
Number of forks causing Ansible playbook to fail
TL;DR - The defaults.forks setting in /etc/ansible/ansible.cfg controls the parallelism of a playbook run. If a play involving complicated wait_for logic hangs when run against a relatively high number of hosts but runs fine against fewer, it could be that the forks are exhausted and the wait_for condition can never be reached.
Rationale -
We have this Ansible role to install a private IPFS cluster, in which we wait_for the bootstrapping peer to have fully started before installing other peers. The detailed sequence is -
- All other peers wait while the bootstrapping peer (the first one in the cluster in our case) is being deployed - configuration files generated, docker container started and service started.
- After the service starts, the bootstrapping peer prints its ID to a file.
- All other peers' wait is unlocked by the bootstrapping peer's ID file; their entry point scripts (in which the bootstrapping peer's ID is referred to by ipfs bootstrap add) are generated along with the other configuration files, and their containers and services are started.
The default value of defaults.forks is 5. If we have, say, 7 hosts that IPFS needs to run on, chances are that all 5 forks block on the wait_for part. There is then no fork left for the bootstrapping peer to finish its tasks, so wait_for never unlocks; the default timeout of 300 seconds is reached and the playbook fails.
Increasing the defaults.forks value to 7 solved our problem.
P.S. It was by reading the official document for the second time that I realized exhaustion of defaults.forks was causing our IPFS installation failure. So many thanks to the Ansible team - and hey, do read!
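For reference, the knob lives in the [defaults] section of ansible.cfg (the value 7 is just this post's example); it can also be overridden per run with the --forks/-f flag of ansible-playbook:

```ini
; /etc/ansible/ansible.cfg
[defaults]
forks = 7
```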
Friday, June 15, 2018
Dump HTTP request with golang
httputil.DumpRequest works like a charm if you would like to dump an HTTP request to its string representation in order to, say, tell precisely what information a front-end application sends to a back-end application. When instructed to dump the body (the second parameter), it takes care to regenerate an identical body on the request, keeping subsequent operations intact.
Thursday, June 7, 2018
How to wait_for a remote file with Ansible
I've recently realized that the wait_for module of Ansible is a bit tricky when it comes to waiting for a remote file, i.e. a file that's supposed to be present or absent on a host other than the one that's running the Ansible task.
If you check the doc, there is this host attribute which is rather intriguing. But when you put it on a test drive, it won't work. My understanding is that host is meant for pairing with port, the two together checking the readiness of, say, a RESTful service.
The correct way then to wait for a remote file is through delegate_to, without further ado.
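A minimal sketch of the pattern (the path, group and timeout here are hypothetical, not taken from our actual role):

```yaml
- name: Wait for the bootstrap peer to publish its ID file
  wait_for:
    path: /data/ipfs/peer_id    # hypothetical remote path
    state: present
    timeout: 300
  delegate_to: "{{ groups['ipfs_peers'][0] }}"
```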
Sunday, June 3, 2018
How to search for Chinese words in Rocket.Chat
TL;DR: do a regular expression search, i.e. wrap your keywords with slashes - /关键字/.
Discussions: https://github.com/RocketChat/Rocket.Chat/issues/713
Saturday, March 31, 2018
How to resize a qcow2 virtual disk image
Not sure why I haven't written this down before, since I did it countless times back in my earliest days with IBM in 2010 or 2011. Today I need to do it again, growing a 20GB qcow2 disk image to, say, 80GB. Here is how, with self-explanatory comments.
root@fabv1:/var/lib/libvirt/images# virsh list --all | grep tony1 # make sure the virtual machine has been shut down
- tony1 shut off
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# mv tony1.qcow2 tony1.small.qcow2 # we don't want to accidentally ruin any data
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# qemu-img convert -p -O raw tony1.small.qcow2 tony1.large.raw # parted and resize2fs can only operate on a raw disk
(100.00/100%)
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# qemu-img info tony1.large.raw
image: tony1.large.raw
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# qemu-img resize tony1.large.raw 80G # first enlarge the virtual disk size
Image resized.
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# qemu-img info tony1.large.raw
image: tony1.large.raw
file format: raw
virtual size: 80G (85899345920 bytes)
disk size: 20G
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# parted tony1.large.raw -- print
Model: (file)
Disk /var/lib/libvirt/images/tony1.large.raw: 85.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 21.5GB 21.5GB primary ext4 boot
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# parted tony1.large.raw -- resizepart 1 -1 # and then modify the partition table
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# parted tony1.large.raw -- print
Model: (file)
Disk /var/lib/libvirt/images/tony1.large.raw: 85.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 85.9GB 85.9GB primary ext4 boot
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# kpartx -av tony1.large.raw
add map loop0p1 (252:0): 0 167768160 linear /dev/loop0 2048
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# e2fsck -f /dev/mapper/loop0p1 # a must before resize2fs
e2fsck 1.42.9 (4-Feb-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/loop0p1: 636755/1310720 files (0.4% non-contiguous), 3724312/5242368 blocks
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# resize2fs -p /dev/mapper/loop0p1 # finally resize the filesystem
resize2fs 1.42.9 (4-Feb-2014)
Resizing the filesystem on /dev/mapper/loop0p1 to 20971020 (4k) blocks.
The filesystem on /dev/mapper/loop0p1 is now 20971020 blocks long.
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# kpartx -d tony1.large.raw
loop deleted : /dev/loop0
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# losetup -a # run "losetup -d /dev/loop0" if it shows in the output
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# qemu-img convert -p -O qcow2 tony1.large.raw tony1.qcow2
(100.00/100%)
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# ll | grep tony1
-rw-r--r-- 1 root root 85899345920 Apr 1 02:00 tony1.large.raw
-rw-r--r-- 1 root root 21268922368 Apr 1 02:03 tony1.qcow2
-rw-r--r-- 1 libvirt-qemu kvm 21319450624 Mar 31 00:18 tony1.small.qcow2
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# chown libvirt-qemu:kvm tony1.qcow2 # we've been manipulating the files as root so far
root@fabv1:/var/lib/libvirt/images#
root@fabv1:/var/lib/libvirt/images# ll | grep tony1
-rw-r--r-- 1 root root 85899345920 Apr 1 02:00 tony1.large.raw
-rw-r--r-- 1 libvirt-qemu kvm 21268922368 Apr 1 02:03 tony1.qcow2
-rw-r--r-- 1 libvirt-qemu kvm 21319450624 Mar 31 00:18 tony1.small.qcow2
root@fabv1:/var/lib/libvirt/images#
A side note: nowadays people tend to use the high-level command virt-resize combined with qemu-img resize, which spares you most of these underlying details. I haven't tried it myself yet though.
Wednesday, March 28, 2018
Building Hyperledger Fabric v1.1.0 on Ubuntu 14.04
TL;DR - Don't do that. Build it on Ubuntu 16.04 and it'll work like a breeze.
Rationale -
Here, by building Hyperledger Fabric, I mean calling make docker against the code and running through the e2e_cli example that demonstrates a proof-of-concept blockchain network. If you've done this before, you'll know that you need the following two base Docker images, which are used to further build Fabric images like peer, orderer, etc.
root@tony2:/opt/gopath/src/github.com/hyperledger/fabric# docker images | grep base
hyperledger/fabric-baseimage x86_64-0.4.7 390ac2e95bc7 37 hours ago 1.41GB
hyperledger/fabric-baseos x86_64-0.4.7 c0e784934c4e 37 hours ago 152MB
I know that it's the 0.4.6 base images that go with the official 1.1.0 release. But I'd just started migrating from 1.0.0 the day before, when 0.4.7 rolled out, so I figured why not give it a try? Simply modifying BASEIMAGE_RELEASE in the Makefile should do the trick.
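If memory serves, the change is a one-liner along these lines (verify the exact variable name and location against your checkout):

```makefile
# fabric/Makefile
BASEIMAGE_RELEASE = 0.4.7
```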
These images provide the runtime environment for the Fabric binary executables. When building these executables, there were actually a bunch of warnings that go something like:
Using 'getpwuid_r' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking.
I obviously didn't give these warnings enough attention when I first saw them. Later, when I tried the e2e_cli example, it failed while trying to install a chaincode, complaining about an unexpected signal during runtime execution with a bunch of call stack traces. It was only on the 2nd or 3rd round of trial and error that I gave the warnings some serious thought, because some of the function names mentioned were similar to those in the call stack.
This is what I believe happened: the binaries were built on Ubuntu 14.04 with glibc 2.19, and the warning message suggested that the runtime environment should also be based on glibc 2.19, which is not the case since the baseimage was an Ubuntu 16.04 with glibc 2.23, hence the error and call stacks.
Having figured all this out, I upgraded my development machine to Ubuntu 16.04 (a command as simple as do-release-upgrade did it nicely), re-built the Fabric images and successfully ran through the e2e_cli example.
A side note: I believe upgrading glibc alone would also work, but there doesn't seem to be an easy way to do that except for building it from source, as Canonical doesn't ship a newer version in the 14.04 package repo.
Side note number two: I revisited my Fabric 1.0.0 environment and realized the baseimage v0.3.1, the official one for Fabric 1.0.0, was already an Ubuntu 16.04 with glibc 2.19, and I'd successfully run the e2e_cli example countless times with Fabric images built on Ubuntu 14.04. I can only assume that back in the 1.0.0 days the Fabric code didn't call the incompatible functions. But now we know this was a dangerous loophole.
Friday, February 2, 2018
Audit logs for GitLab
Managing a software company at some scale, you may want your source code management (SCM) system monitored and audited, for the sake of securing intellectual property. We are using GitLab, which doesn't include such a feature in its community edition. The reason is understandable - you will otherwise be considered a target customer for its paid, enterprise edition.
Being open source, however, the community edition can easily be improved by following the pages below. I haven't tried the solution myself yet, and will update this post if I hit issues.
Sunday, January 14, 2018
The init business with docker
Readings:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
https://engineeringblog.yelp.com/2016/01/dumb-init-an-init-for-docker.html
P.S. Docker now natively supports an init mechanism:
https://docs.docker.com/engine/reference/run/#specify-an-init-process