<!-- markdownlint-disable MD024 -->
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](http://semver.org).
## [v10.2.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v10.2.0) - 2025-03-10
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v10.1.0...v10.2.0)
### Added
- (GH-962) Allow Amazon Linux 2 and newer versions [#972](https://github.com/puppetlabs/puppetlabs-docker/pull/972) ([rjd1](https://github.com/rjd1))
## [v10.1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v10.1.0) - 2024-12-18
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v10.0.1...v10.1.0)
### Added
- Add support for EL9 [#1007](https://github.com/puppetlabs/puppetlabs-docker/pull/1007) ([ghoneycutt](https://github.com/ghoneycutt))
## [v10.0.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v10.0.1) - 2024-07-13
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v10.0.0...v10.0.1)
### Fixed
- systemd service overrides: restore whitespace to pre-conversion [#983](https://github.com/puppetlabs/puppetlabs-docker/pull/983) ([kenyon](https://github.com/kenyon))
## [v10.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v10.0.0) - 2024-07-04
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v9.1.0...v10.0.0)
### Changed
- Use the docker-compose-plugin via 'docker compose' instead of 'docker-compose' [#975](https://github.com/puppetlabs/puppetlabs-docker/pull/975) ([nathanlcarlson](https://github.com/nathanlcarlson))
### Fixed
- (CAT-1143) Conversion of ERB templates to EPP [#944](https://github.com/puppetlabs/puppetlabs-docker/pull/944) ([praj1001](https://github.com/praj1001))
## [v9.1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v9.1.0) - 2023-07-19
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v9.0.1...v9.1.0)
### Added
- CONT-568 : Adding deferred function for password [#918](https://github.com/puppetlabs/puppetlabs-docker/pull/918) ([malikparvez](https://github.com/malikparvez))
## [v9.0.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v9.0.1) - 2023-07-06
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v8.0.0...v9.0.1)
### Fixed
- (CONT-1196) - Remove deprecated function merge [#935](https://github.com/puppetlabs/puppetlabs-docker/pull/935) ([Ramesh7](https://github.com/Ramesh7))
## [v8.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v8.0.0) - 2023-07-05
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v7.0.0...v8.0.0)
### Changed
- pdksync - (MAINT) - Require Stdlib 9.x [#921](https://github.com/puppetlabs/puppetlabs-docker/pull/921) ([LukasAud](https://github.com/LukasAud))
### Added
- (CONT-1121) - Add support for CentOS 8 [#926](https://github.com/puppetlabs/puppetlabs-docker/pull/926) ([jordanbreen28](https://github.com/jordanbreen28))
## [v7.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v7.0.0) - 2023-05-02
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v6.1.0...v7.0.0)
### Changed
- (CONT-776) - Add Puppet 8/Drop Puppet 6 [#910](https://github.com/puppetlabs/puppetlabs-docker/pull/910) ([jordanbreen28](https://github.com/jordanbreen28))
## [v6.1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v6.1.0) - 2023-04-28
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v6.0.2...v6.1.0)
### Added
- (CONT-351) Syntax update [#901](https://github.com/puppetlabs/puppetlabs-docker/pull/901) ([LukasAud](https://github.com/LukasAud))
### Fixed
- Fix `docker` fact with recent version of docker [#897](https://github.com/puppetlabs/puppetlabs-docker/pull/897) ([smortex](https://github.com/smortex))
- Use puppet yaml helper to workaround psych >4 breaking changes [#877](https://github.com/puppetlabs/puppetlabs-docker/pull/877) ([gfargeas](https://github.com/gfargeas))
## [v6.0.2](https://github.com/puppetlabs/puppetlabs-docker/tree/v6.0.2) - 2022-12-08
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v6.0.1...v6.0.2)
### Fixed
- (CONT-24) docker_stack always redeploying [#878](https://github.com/puppetlabs/puppetlabs-docker/pull/878) ([david22swan](https://github.com/david22swan))
## [v6.0.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v6.0.1) - 2022-11-25
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v6.0.0...v6.0.1)
### Fixed
- Revert "(maint) Hardening manifests and tasks" [#875](https://github.com/puppetlabs/puppetlabs-docker/pull/875) ([pmcmaw](https://github.com/pmcmaw))
## [v6.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v6.0.0) - 2022-11-21
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v5.1.0...v6.0.0)
### Changed
- (CONT-263) Bumping required puppet version [#871](https://github.com/puppetlabs/puppetlabs-docker/pull/871) ([LukasAud](https://github.com/LukasAud))
- docker_run_flags: Shellescape any provided values [#869](https://github.com/puppetlabs/puppetlabs-docker/pull/869) ([LukasAud](https://github.com/LukasAud))
- (maint) Hardening manifests and tasks [#863](https://github.com/puppetlabs/puppetlabs-docker/pull/863) ([LukasAud](https://github.com/LukasAud))
## [v5.1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v5.1.0) - 2022-10-21
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v5.0.0...v5.1.0)
### Added
- Add missing extra_systemd_parameters values to docker-run.erb [#851](https://github.com/puppetlabs/puppetlabs-docker/pull/851) ([cbowman0](https://github.com/cbowman0))
### Fixed
- pdksync - (CONT-130) Dropping Support for Debian 9 [#859](https://github.com/puppetlabs/puppetlabs-docker/pull/859) ([jordanbreen28](https://github.com/jordanbreen28))
- Change `stop_wait_time` value to match Docker default [#858](https://github.com/puppetlabs/puppetlabs-docker/pull/858) ([sebcdri](https://github.com/sebcdri))
## [v5.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v5.0.0) - 2022-08-19
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.4.0...v5.0.0)
### Changed
- Remove log_driver limitations [#792](https://github.com/puppetlabs/puppetlabs-docker/pull/792) ([timdeluxe](https://github.com/timdeluxe))
### Added
- pdksync - (GH-cat-11) Certify Support for Ubuntu 22.04 [#850](https://github.com/puppetlabs/puppetlabs-docker/pull/850) ([david22swan](https://github.com/david22swan))
- adding optional variable for package_key_check_source to RedHat [#846](https://github.com/puppetlabs/puppetlabs-docker/pull/846) ([STaegtmeier](https://github.com/STaegtmeier))
- New create_user parameter on main class [#841](https://github.com/puppetlabs/puppetlabs-docker/pull/841) ([traylenator](https://github.com/traylenator))
## [v4.4.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.4.0) - 2022-06-01
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.3.0...v4.4.0)
### Added
- Update Docker Compose to handle symbols [#834](https://github.com/puppetlabs/puppetlabs-docker/pull/834) ([sili72](https://github.com/sili72))
### Fixed
- Avoid empty array to -net parameter [#837](https://github.com/puppetlabs/puppetlabs-docker/pull/837) ([chelnak](https://github.com/chelnak))
- (GH-785) Fix duplicate stack matching [#836](https://github.com/puppetlabs/puppetlabs-docker/pull/836) ([chelnak](https://github.com/chelnak))
- Fix docker-compose, network and volumes not applying on 1st run, fix other idempotency [#833](https://github.com/puppetlabs/puppetlabs-docker/pull/833) ([canihavethisone](https://github.com/canihavethisone))
- Fixed docker facts to check for active swarm clusters before running docker swarm sub-commands. [#817](https://github.com/puppetlabs/puppetlabs-docker/pull/817) ([nmaludy](https://github.com/nmaludy))
## [v4.3.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.3.0) - 2022-05-16
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.2.1...v4.3.0)
### Added
- Add tmpdir option to docker_compose [#823](https://github.com/puppetlabs/puppetlabs-docker/pull/823) ([canihavethisone](https://github.com/canihavethisone))
- Support different Architectures (like `aarch64`) when installing Compose [#812](https://github.com/puppetlabs/puppetlabs-docker/pull/812) ([mpdude](https://github.com/mpdude))
### Fixed
- Only install docker-ce-cli with docker-ce [#827](https://github.com/puppetlabs/puppetlabs-docker/pull/827) ([vchepkov](https://github.com/vchepkov))
- remove some legacy facts [#802](https://github.com/puppetlabs/puppetlabs-docker/pull/802) ([traylenator](https://github.com/traylenator))
- Fix missing comma in docker::image example [#787](https://github.com/puppetlabs/puppetlabs-docker/pull/787) ([Vincevrp](https://github.com/Vincevrp))
- allow docker::networks::networks param to be undef [#783](https://github.com/puppetlabs/puppetlabs-docker/pull/783) ([jhoblitt](https://github.com/jhoblitt))
## [v4.2.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.2.1) - 2022-04-14
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.2.0...v4.2.1)
### Fixed
- Fix permission denied issue introduced in v4.2.0 [#820](https://github.com/puppetlabs/puppetlabs-docker/pull/820) ([chelnak](https://github.com/chelnak))
## [v4.2.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.2.0) - 2022-04-11
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.1.2...v4.2.0)
### Added
- pdksync - (FM-8922) - Add Support for Windows 2022 [#801](https://github.com/puppetlabs/puppetlabs-docker/pull/801) ([david22swan](https://github.com/david22swan))
- (IAC-1729) Add Support for Debian 11 [#799](https://github.com/puppetlabs/puppetlabs-docker/pull/799) ([david22swan](https://github.com/david22swan))
### Fixed
- pdksync - (GH-iac-334) Remove Support for Ubuntu 14.04/16.04 [#807](https://github.com/puppetlabs/puppetlabs-docker/pull/807) ([david22swan](https://github.com/david22swan))
- Fix idempotency when using scaling with docker-compose [#805](https://github.com/puppetlabs/puppetlabs-docker/pull/805) ([canihavethisone](https://github.com/canihavethisone))
- Make RedHat version check respect acknowledge_unsupported_os [#788](https://github.com/puppetlabs/puppetlabs-docker/pull/788) ([PolaricEntropy](https://github.com/PolaricEntropy))
## [v4.1.2](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.1.2) - 2021-09-27
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.1.1...v4.1.2)
### Fixed
- pdksync - (IAC-1598) - Remove Support for Debian 8 [#775](https://github.com/puppetlabs/puppetlabs-docker/pull/775) ([david22swan](https://github.com/david22swan))
- Prefer timeout to time_limit for Facter::Core::Execution [#774](https://github.com/puppetlabs/puppetlabs-docker/pull/774) ([smortex](https://github.com/smortex))
- Fix facts gathering [#773](https://github.com/puppetlabs/puppetlabs-docker/pull/773) ([smortex](https://github.com/smortex))
## [v4.1.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.1.1) - 2021-08-26
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.1.0...v4.1.1)
### Fixed
- (IAC-1741) Allow stdlib v8.0.0 [#767](https://github.com/puppetlabs/puppetlabs-docker/pull/767) ([david22swan](https://github.com/david22swan))
- Remove stderr empty check to avoid docker_params_changed failures when warnings appear [#764](https://github.com/puppetlabs/puppetlabs-docker/pull/764) ([cedws](https://github.com/cedws))
- Duplicate declaration statement: docker_params_changed is already declared [#763](https://github.com/puppetlabs/puppetlabs-docker/pull/763) ([basti-nis](https://github.com/basti-nis))
- Timeout for hangs of the docker_client in the facts generation [#759](https://github.com/puppetlabs/puppetlabs-docker/pull/759) ([carabasdaniel](https://github.com/carabasdaniel))
## [v4.1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.1.0) - 2021-06-28
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.0.1...v4.1.0)
### Added
- Add syslog_facility parameter to docker::run [#755](https://github.com/puppetlabs/puppetlabs-docker/pull/755) ([waipeng](https://github.com/waipeng))
### Fixed
- Fix docker::volumes hiera example [#754](https://github.com/puppetlabs/puppetlabs-docker/pull/754) ([pskopnik](https://github.com/pskopnik))
- Allow force update non-latest tagged image [#752](https://github.com/puppetlabs/puppetlabs-docker/pull/752) ([yanjunding](https://github.com/yanjunding))
- Allow management of the docker-ce-cli package [#740](https://github.com/puppetlabs/puppetlabs-docker/pull/740) ([kenyon](https://github.com/kenyon))
## [v4.0.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.0.1) - 2021-05-26
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v4.0.0...v4.0.1)
### Fixed
- (IAC-1497) - Removal of unsupported `translate` dependency [#737](https://github.com/puppetlabs/puppetlabs-docker/pull/737) ([david22swan](https://github.com/david22swan))
- add simple quotes around env service flag [#706](https://github.com/puppetlabs/puppetlabs-docker/pull/706) ([adrianiurca](https://github.com/adrianiurca))
## [v4.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v4.0.0) - 2021-03-04
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.14.0...v4.0.0)
## [v3.14.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.14.0) - 2021-03-04
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.13.1...v3.14.0)
### Changed
- pdksync - Remove Puppet 5 from testing and bump minimal version to 6.0.0 [#718](https://github.com/puppetlabs/puppetlabs-docker/pull/718) ([carabasdaniel](https://github.com/carabasdaniel))
### Fixed
- [MODULES-10898] Disable forced docker service restart for RedHat 7 and docker server 1.13 [#730](https://github.com/puppetlabs/puppetlabs-docker/pull/730) ([carabasdaniel](https://github.com/carabasdaniel))
- Make it possible to use pod's network [#725](https://github.com/puppetlabs/puppetlabs-docker/pull/725) ([seriv](https://github.com/seriv))
## [v3.13.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.13.1) - 2021-02-02
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.13.0...v3.13.1)
### Fixed
- (IAC-1218) - docker_params_changed should be run by agent [#705](https://github.com/puppetlabs/puppetlabs-docker/pull/705) ([adrianiurca](https://github.com/adrianiurca))
- Fix systemd units for systemd versions < v230 [#704](https://github.com/puppetlabs/puppetlabs-docker/pull/704) ([benningm](https://github.com/benningm))
- setting HOME environment to /root [#698](https://github.com/puppetlabs/puppetlabs-docker/pull/698) ([adrianiurca](https://github.com/adrianiurca))
## [v3.13.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.13.0) - 2020-12-14
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.12.1...v3.13.0)
### Added
- pdksync - (feat) - Bump Puppet boundary [#687](https://github.com/puppetlabs/puppetlabs-docker/pull/687) ([daianamezdrea](https://github.com/daianamezdrea))
- Ensure image digest checksum before starting [#673](https://github.com/puppetlabs/puppetlabs-docker/pull/673) ([tmanninger](https://github.com/tmanninger))
- Support multiple mirrors #659 [#669](https://github.com/puppetlabs/puppetlabs-docker/pull/669) ([TheLocehiliosan](https://github.com/TheLocehiliosan))
### Fixed
- Options to docker-compose should be an Array, not a String [#695](https://github.com/puppetlabs/puppetlabs-docker/pull/695) ([adrianiurca](https://github.com/adrianiurca))
- fixing issue #689 by setting HOME in docker command [#692](https://github.com/puppetlabs/puppetlabs-docker/pull/692) ([sdinten](https://github.com/sdinten))
- (MAINT) Use docker-compose config instead file parsing [#672](https://github.com/puppetlabs/puppetlabs-docker/pull/672) ([rbelnap](https://github.com/rbelnap))
- Fix array of additional flags [#671](https://github.com/puppetlabs/puppetlabs-docker/pull/671) ([CAPSLOCK2000](https://github.com/CAPSLOCK2000))
- Test against OS family rather than name [#667](https://github.com/puppetlabs/puppetlabs-docker/pull/667) ([bodgit](https://github.com/bodgit))
## [v3.12.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.12.1) - 2020-10-14
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.12.0...v3.12.1)
### Fixed
- Fix misplaced backslash in start template [#666](https://github.com/puppetlabs/puppetlabs-docker/pull/666) ([optiz0r](https://github.com/optiz0r))
## [v3.12.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.12.0) - 2020-09-29
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.11.0...v3.12.0)
### Added
- Add docker swarm join-tokens as facts [#651](https://github.com/puppetlabs/puppetlabs-docker/pull/651) ([oschusler](https://github.com/oschusler))
### Fixed
- (IAC-982) - Remove inappropriate terminology [#654](https://github.com/puppetlabs/puppetlabs-docker/pull/654) ([david22swan](https://github.com/david22swan))
## [v3.11.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.11.0) - 2020-08-11
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.10.2...v3.11.0)
### Added
- Fix #584: Deal with Arrays for the net list [#647](https://github.com/puppetlabs/puppetlabs-docker/pull/647) ([MG2R](https://github.com/MG2R))
- pdksync - (IAC-973) - Update travis/appveyor to run on new default branch main [#643](https://github.com/puppetlabs/puppetlabs-docker/pull/643) ([david22swan](https://github.com/david22swan))
### Fixed
- [MODULES-10734] - improve params detection on docker::run [#648](https://github.com/puppetlabs/puppetlabs-docker/pull/648) ([adrianiurca](https://github.com/adrianiurca))
## [v3.10.2](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.10.2) - 2020-07-17
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.10.1...v3.10.2)
### Fixed
- (MODULES-10691) - Add root_dir in daemon.json [#632](https://github.com/puppetlabs/puppetlabs-docker/pull/632) ([daianamezdrea](https://github.com/daianamezdrea))
- Fixing the fix 'Fix the docker_compose options parameter position #378' [#631](https://github.com/puppetlabs/puppetlabs-docker/pull/631) ([awegmann](https://github.com/awegmann))
- Blocking ordering between non-Windows service stops [#622](https://github.com/puppetlabs/puppetlabs-docker/pull/622) ([AndrewLipscomb](https://github.com/AndrewLipscomb))
- Allow all 3.x docker-compose minor versions [#620](https://github.com/puppetlabs/puppetlabs-docker/pull/620) ([runejuhl](https://github.com/runejuhl))
## [v3.10.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.10.1) - 2020-05-28
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.10.0...v3.10.1)
### Fixed
- Fix unreachable StartLimitBurst value in unit template [#616](https://github.com/puppetlabs/puppetlabs-docker/pull/616) ([omeinderink](https://github.com/omeinderink))
- (MODULES-9696) remove docker_home_dirs fact [#613](https://github.com/puppetlabs/puppetlabs-docker/pull/613) ([carabasdaniel](https://github.com/carabasdaniel))
- [MODULES-10629] Throw error when docker login fails [#610](https://github.com/puppetlabs/puppetlabs-docker/pull/610) ([carabasdaniel](https://github.com/carabasdaniel))
- (maint) - facts fix for centos [#608](https://github.com/puppetlabs/puppetlabs-docker/pull/608) ([david22swan](https://github.com/david22swan))
- major adjustments for current code style [#607](https://github.com/puppetlabs/puppetlabs-docker/pull/607) ([crazymind1337](https://github.com/crazymind1337))
## [v3.10.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.10.0) - 2020-04-23
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.9.1...v3.10.0)
### Added
- [IAC-291] Convert acceptance tests to Litmus [#585](https://github.com/puppetlabs/puppetlabs-docker/pull/585) ([carabasdaniel](https://github.com/carabasdaniel))
- Updated: Add Docker service (create, remove, scale) tasks [#582](https://github.com/puppetlabs/puppetlabs-docker/pull/582) ([Flask](https://github.com/Flask))
- Add after_start and after_stop options to docker::run define [#580](https://github.com/puppetlabs/puppetlabs-docker/pull/580) ([jantman](https://github.com/jantman))
- Make docker::machine::url configurable [#569](https://github.com/puppetlabs/puppetlabs-docker/pull/569) ([baurmatt](https://github.com/baurmatt))
- Let docker.service start docker services managed by puppetlabs/docker… [#563](https://github.com/puppetlabs/puppetlabs-docker/pull/563) ([jhejl](https://github.com/jhejl))
- Allow bypassing curl package ensure if needed [#477](https://github.com/puppetlabs/puppetlabs-docker/pull/477) ([esalberg](https://github.com/esalberg))
### Fixed
- Enforce TLS1.2 on Windows; minor fixes for RH-based testing [#603](https://github.com/puppetlabs/puppetlabs-docker/pull/603) ([carabasdaniel](https://github.com/carabasdaniel))
- [MODULES-10628] Update documentation for docker volume and set options as parameter [#599](https://github.com/puppetlabs/puppetlabs-docker/pull/599) ([carabasdaniel](https://github.com/carabasdaniel))
- Allow module to work on SLES [#591](https://github.com/puppetlabs/puppetlabs-docker/pull/591) ([npwalker](https://github.com/npwalker))
- (maint) Fix missing stubs in docker_spec.rb [#589](https://github.com/puppetlabs/puppetlabs-docker/pull/589) ([Filipovici-Andrei](https://github.com/Filipovici-Andrei))
- Add Hiera lookups for resources in init.pp [#586](https://github.com/puppetlabs/puppetlabs-docker/pull/586) ([fe80](https://github.com/fe80))
- Use standardized quote type to help tests pass [#566](https://github.com/puppetlabs/puppetlabs-docker/pull/566) ([DLeich](https://github.com/DLeich))
- Minimal changes to work with podman-docker [#562](https://github.com/puppetlabs/puppetlabs-docker/pull/562) ([seriv](https://github.com/seriv))
## [v3.9.1](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.9.1) - 2020-01-20
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.9.0...v3.9.1)
### Fixed
- (maint) fix dependencies of powershell to 4.0.0 [#568](https://github.com/puppetlabs/puppetlabs-docker/pull/568) ([sheenaajay](https://github.com/sheenaajay))
## [v3.9.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.9.0) - 2019-12-09
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.8.0...v3.9.0)
### Added
- Add option for RemainAfterExit [#549](https://github.com/puppetlabs/puppetlabs-docker/pull/549) ([vdavidoff](https://github.com/vdavidoff))
### Fixed
- Fix error not shown when image:tag does not exist (#552) [#553](https://github.com/puppetlabs/puppetlabs-docker/pull/553) ([rafaelcarv](https://github.com/rafaelcarv))
- Allow defining the name of the docker-compose symlink [#544](https://github.com/puppetlabs/puppetlabs-docker/pull/544) ([gtufte](https://github.com/gtufte))
- Clarify usage of docker_stack type up_args and fix link to docs [#537](https://github.com/puppetlabs/puppetlabs-docker/pull/537) ([jacksgt](https://github.com/jacksgt))
- Move StartLimit* options to [Unit], fix StartLimitIntervalSec [#531](https://github.com/puppetlabs/puppetlabs-docker/pull/531) ([runejuhl](https://github.com/runejuhl))
## [v3.8.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.8.0) - 2019-10-01
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.7.0-bna...v3.8.0)
### Added
- pdksync - Add support on Debian10 [#525](https://github.com/puppetlabs/puppetlabs-docker/pull/525) ([lionce](https://github.com/lionce))
### Fixed
- Fix multiple additional flags for docker_network [#523](https://github.com/puppetlabs/puppetlabs-docker/pull/523) ([lemrouch](https://github.com/lemrouch))
- :bug: Fix wrong service detach handling [#520](https://github.com/puppetlabs/puppetlabs-docker/pull/520) ([khaefeli](https://github.com/khaefeli))
- Fix aliased plugin names [#514](https://github.com/puppetlabs/puppetlabs-docker/pull/514) ([koshatul](https://github.com/koshatul))
## [v3.7.0-bna](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.7.0-bna) - 2019-08-08
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/e2.6...v3.7.0-bna)
### Added
- Add new Docker Swarm Tasks (node ls, rm, update; service scale) [#509](https://github.com/puppetlabs/puppetlabs-docker/pull/509) ([khaefeli](https://github.com/khaefeli))
### Fixed
- Fixing error: [#516](https://github.com/puppetlabs/puppetlabs-docker/pull/516) ([darshannnn](https://github.com/darshannnn))
## [e2.6](https://github.com/puppetlabs/puppetlabs-docker/tree/e2.6) - 2019-07-26
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.7.0...e2.6)
## [v3.7.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.7.0) - 2019-07-19
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/v3.6.0...v3.7.0)
### Added
- Added option to override docker-compose download location [#482](https://github.com/puppetlabs/puppetlabs-docker/pull/482) ([piquet90](https://github.com/piquet90))
## [v3.6.0](https://github.com/puppetlabs/puppetlabs-docker/tree/v3.6.0) - 2019-06-25
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/3.5.0...v3.6.0)
### Changed
- (FM-8100) Update minimum supported Puppet version to 5.5.10 [#486](https://github.com/puppetlabs/puppetlabs-docker/pull/486) ([eimlav](https://github.com/eimlav))
### Added
- (FM-8151) Add Windows Server 2019 support [#493](https://github.com/puppetlabs/puppetlabs-docker/pull/493) ([eimlav](https://github.com/eimlav))
- Support for docker machine download and install [#466](https://github.com/puppetlabs/puppetlabs-docker/pull/466) ([acurus-puppetmaster](https://github.com/acurus-puppetmaster))
- Add service_provider parameter to docker::run [#376](https://github.com/puppetlabs/puppetlabs-docker/pull/376) ([jameslikeslinux](https://github.com/jameslikeslinux))
### Fixed
- Tasks frozen string [#499](https://github.com/puppetlabs/puppetlabs-docker/pull/499) ([khaefeli](https://github.com/khaefeli))
- Fix #239 local_user permission denied [#497](https://github.com/puppetlabs/puppetlabs-docker/pull/497) ([thde](https://github.com/thde))
- (MODULES-9193) Revert part of MODULES-9177 [#490](https://github.com/puppetlabs/puppetlabs-docker/pull/490) ([eimlav](https://github.com/eimlav))
- (MODULES-9177) Fix version validation regex [#489](https://github.com/puppetlabs/puppetlabs-docker/pull/489) ([eimlav](https://github.com/eimlav))
- Fix publish flag being erroneously added to docker service commands [#471](https://github.com/puppetlabs/puppetlabs-docker/pull/471) ([twistedduck](https://github.com/twistedduck))
- Fix container running check to work for windows hosts [#470](https://github.com/puppetlabs/puppetlabs-docker/pull/470) ([florindragos](https://github.com/florindragos))
- Allow images tagged latest to update on each run [#468](https://github.com/puppetlabs/puppetlabs-docker/pull/468) ([electrofelix](https://github.com/electrofelix))
- Fix docker::image to not run images [#465](https://github.com/puppetlabs/puppetlabs-docker/pull/465) ([hugotanure](https://github.com/hugotanure))
## [3.5.0](https://github.com/puppetlabs/puppetlabs-docker/tree/3.5.0) - 2019-03-14
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/3.4.0...3.5.0)
### Fixed
- fix(syntax): Remove duplicate parenthesis [#454](https://github.com/puppetlabs/puppetlabs-docker/pull/454) ([jfroche](https://github.com/jfroche))
- Docker::Services:: fix command parameter used with an array [#452](https://github.com/puppetlabs/puppetlabs-docker/pull/452) ([jacksgt](https://github.com/jacksgt))
- docker::services: Fix using multiple published ports [#447](https://github.com/puppetlabs/puppetlabs-docker/pull/447) ([jacksgt](https://github.com/jacksgt))
## [3.4.0](https://github.com/puppetlabs/puppetlabs-docker/tree/3.4.0) - 2019-02-25
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/e2.0...3.4.0)
## [e2.0](https://github.com/puppetlabs/puppetlabs-docker/tree/e2.0) - 2019-02-21
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/3.3.0...e2.0)
### Fixed
- fixing errors with bundle file conditional statement [#436](https://github.com/puppetlabs/puppetlabs-docker/pull/436) ([davejrt](https://github.com/davejrt))
- #432 Fix frozen string error [#434](https://github.com/puppetlabs/puppetlabs-docker/pull/434) ([khaefeli](https://github.com/khaefeli))
## [3.3.0](https://github.com/puppetlabs/puppetlabs-docker/tree/3.3.0) - 2019-02-13
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/3.2.0...3.3.0)
## [3.2.0](https://github.com/puppetlabs/puppetlabs-docker/tree/3.2.0) - 2019-01-17
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/3.1.0...3.2.0)
### Fixed
- centos repo fix [#413](https://github.com/puppetlabs/puppetlabs-docker/pull/413) ([davejrt](https://github.com/davejrt))
- rhel fix [#412](https://github.com/puppetlabs/puppetlabs-docker/pull/412) ([davejrt](https://github.com/davejrt))
- fixing various issues with tests across all supported OS [#406](https://github.com/puppetlabs/puppetlabs-docker/pull/406) ([davejrt](https://github.com/davejrt))
- Fix shared scripts on non systemd systems [#400](https://github.com/puppetlabs/puppetlabs-docker/pull/400) ([glorpen](https://github.com/glorpen))
- Do not load powershell profiles [#396](https://github.com/puppetlabs/puppetlabs-docker/pull/396) ([florindragos](https://github.com/florindragos))
- Fix stack acceptance tests [#395](https://github.com/puppetlabs/puppetlabs-docker/pull/395) ([florindragos](https://github.com/florindragos))
- fixing acceptance tests on debian [#393](https://github.com/puppetlabs/puppetlabs-docker/pull/393) ([davejrt](https://github.com/davejrt))
- fixing deep merge issue and yaml alias [#387](https://github.com/puppetlabs/puppetlabs-docker/pull/387) ([davejrt](https://github.com/davejrt))
- Adds a Usage example for daemon level extra_parameters [#386](https://github.com/puppetlabs/puppetlabs-docker/pull/386) ([mpepping](https://github.com/mpepping))
- Fixing create_resources for volumes [#384](https://github.com/puppetlabs/puppetlabs-docker/pull/384) ([andytechdad](https://github.com/andytechdad))
- Fix the docker_compose options parameter position [#378](https://github.com/puppetlabs/puppetlabs-docker/pull/378) ([FlorentPoinsaut](https://github.com/FlorentPoinsaut))
- Allow multiple values for subnet in docker_network [#371](https://github.com/puppetlabs/puppetlabs-docker/pull/371) ([florindragos](https://github.com/florindragos))
- Cloud 2191 fix stack acceptance test [#368](https://github.com/puppetlabs/puppetlabs-docker/pull/368) ([MWilsonPuppet](https://github.com/MWilsonPuppet))
- Create shared start/stop scripts for better extensibility [#367](https://github.com/puppetlabs/puppetlabs-docker/pull/367) ([glorpen](https://github.com/glorpen))
- Fixing incorrect variable names in docker_compose/ruby.rb [#365](https://github.com/puppetlabs/puppetlabs-docker/pull/365) ([lowerpuppet](https://github.com/lowerpuppet))
- update docker_stack to fix registry auth option [#364](https://github.com/puppetlabs/puppetlabs-docker/pull/364) ([davejrt](https://github.com/davejrt))
- Fix registry local_user functionality [#353](https://github.com/puppetlabs/puppetlabs-docker/pull/353) ([stejanse](https://github.com/stejanse))
- Fix windows default paths [#326](https://github.com/puppetlabs/puppetlabs-docker/pull/326) ([florindragos](https://github.com/florindragos))
## [3.1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/3.1.0) - 2018-10-22
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/3.0.0...3.1.0)
### Fixed
- Fix acceptance tests for windows [#349](https://github.com/puppetlabs/puppetlabs-docker/pull/349) ([florindragos](https://github.com/florindragos))
- pinning puppet version to fix failing spec tests [#346](https://github.com/puppetlabs/puppetlabs-docker/pull/346) ([MWilsonPuppet](https://github.com/MWilsonPuppet))
## [3.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/3.0.0) - 2018-09-27
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/2.0.0...3.0.0)
### Fixed
- Fix docker swarm tasks boolean params [#338](https://github.com/puppetlabs/puppetlabs-docker/pull/338) ([florindragos](https://github.com/florindragos))
- CLOUD-2069 Adding support for multiple compose files. [#332](https://github.com/puppetlabs/puppetlabs-docker/pull/332) ([MWilsonPuppet](https://github.com/MWilsonPuppet))
- fixes puppet run failures with no IPAM driver [#329](https://github.com/puppetlabs/puppetlabs-docker/pull/329) ([davejrt](https://github.com/davejrt))
- CLOUD-2078-Uninstall Docker on Linux [#328](https://github.com/puppetlabs/puppetlabs-docker/pull/328) ([MWilsonPuppet](https://github.com/MWilsonPuppet))
- Fix error messages from docker facts if docker not running [#325](https://github.com/puppetlabs/puppetlabs-docker/pull/325) ([stdietrich](https://github.com/stdietrich))
- Fix docker-compose provider to support images built on the fly [#320](https://github.com/puppetlabs/puppetlabs-docker/pull/320) ([florindragos](https://github.com/florindragos))
- fixing bug in upstart systems [#304](https://github.com/puppetlabs/puppetlabs-docker/pull/304) ([davejrt](https://github.com/davejrt))
- Fix for registry password not being inserted due to single quotes [#299](https://github.com/puppetlabs/puppetlabs-docker/pull/299) ([ConorPKeegan](https://github.com/ConorPKeegan))
- Fix docker registry idempotency and add windows acceptance tests [#298](https://github.com/puppetlabs/puppetlabs-docker/pull/298) ([florindragos](https://github.com/florindragos))
- Regex fix [#292](https://github.com/puppetlabs/puppetlabs-docker/pull/292) ([davejrt](https://github.com/davejrt))
## [2.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/2.0.0) - 2018-07-18
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/1.1.0...2.0.0)
### Fixed
- fixes restarting containers with changes to run arguments [#283](https://github.com/puppetlabs/puppetlabs-docker/pull/283) ([davejrt](https://github.com/davejrt))
- fix "start a container with cpuset" acceptance test on ubuntu1404 [#236](https://github.com/puppetlabs/puppetlabs-docker/pull/236) ([mihaibuzgau](https://github.com/mihaibuzgau))
## [1.1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/1.1.0) - 2018-03-16
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/e1.1...1.1.0)
### Fixed
- (maint)CLOUD-1768 Fixing incorrect cpuset flag #183 [#187](https://github.com/puppetlabs/puppetlabs-docker/pull/187) ([MWilsonPuppet](https://github.com/MWilsonPuppet))
## [e1.1](https://github.com/puppetlabs/puppetlabs-docker/tree/e1.1) - 2018-02-15
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/1.0.5...e1.1)
### Fixed
- fix typo [#149](https://github.com/puppetlabs/puppetlabs-docker/pull/149) ([seriv](https://github.com/seriv))
## [1.0.5](https://github.com/puppetlabs/puppetlabs-docker/tree/1.0.5) - 2018-01-31
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/1.0.4...1.0.5)
## [1.0.4](https://github.com/puppetlabs/puppetlabs-docker/tree/1.0.4) - 2018-01-03
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/1.0.3...1.0.4)
## [1.0.3](https://github.com/puppetlabs/puppetlabs-docker/tree/1.0.3) - 2018-01-03
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/e1.0...1.0.3)
## [e1.0](https://github.com/puppetlabs/puppetlabs-docker/tree/e1.0) - 2017-12-21
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/1.0.2...e1.0)
## [1.0.2](https://github.com/puppetlabs/puppetlabs-docker/tree/1.0.2) - 2017-11-17
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/1.0.1...1.0.2)
## [1.0.1](https://github.com/puppetlabs/puppetlabs-docker/tree/1.0.1) - 2017-10-15
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/1.0.0...1.0.1)
## [1.0.0](https://github.com/puppetlabs/puppetlabs-docker/tree/1.0.0) - 2017-10-11
[Full Changelog](https://github.com/puppetlabs/puppetlabs-docker/compare/790d08e2a1191db1b6c61f299d259fcd28cfa4e0...1.0.0)

# Setting ownership to the modules team
* @puppetlabs/modules

# Contributing to Puppet modules
Check out our [Contributing to Supported Modules Blog Post](https://puppetlabs.github.io/iac/docs/contributing_to_a_module.html) to find all the information that you will need.

554 Gareth Rushgrove
25 Kyle Anderson
14 Jo Vandeginste
14 Andrew Teixeira
11 Javier Bértoli
10 Justin Dray
10 Nikita Tarasov
10 Vahram Sukyas
9 Patrick Hemmer
9 rafael_chicoli
8 Jonathan Tripathy
8 n0coast
8 James Carr
8 pandrew
8 jfarrell
7 Lars van de Kerkhof
7 Lukas Waslowski
7 Jean-Francois Roche
7 Cornel Foltea
7 Elias Probst
6 Paul Morgan
6 Alex Hornung
6 Joshua Hoblitt
6 Tomas Doran
6 Scott Coulton
5 Casper Bruun
5 Camille Mougey
5 paschdan
5 Fredrik Thulin
4 Cristian Falcas
4 Janos Feher
4 scott coulton
4 Christophe Fonteyne
4 Clayton O'Neill
4 Vilmos Nebehaj
4 Brandon Rochon
4 Ben Langfeld
4 Frank Kleine
4 Bradley Cicenas
4 Wim Bonthuis
3 Edward Midolo
3 Greg Hardy
3 Jonathan Sokolowski
3 hcguersoy
3 Hylke Stapersma
3 James Edwards
3 Vikraman Choudhury
3 Brian Johnson
3 Thomas Krille
3 Bryan Jen
3 Terry Zink
3 Ryan Fowler
3 Rasmus Johansson
3 Rafael Chicoli
3 Daniel Platt
3 Mason Malone
3 Markus Frosch
3 Darren Coxall
3 Andrew Stangl
3 David Schmitt
3 Marji Cermak
2 Sam Grimee
2 Alex Crowe
2 Alexandre RAOUL
2 Benjamin Pineau
2 Bill Simon
2 Bob Potter
2 Caleb Tomlinson
2 Carles Amigó
2 Daniel Panteleit
2 David Danzilio
2 Dominic Becker
2 Hal Deadman
2 Hunter Haugen
2 Ilya Kalinin
2 Jo Vanvoorden
2 Joaquin
2 Josh Samuelson
2 Marc Schaer
2 Mickaël PERRIN
2 Nikita
2 Paul Otto
2 Reser, Ben
2 Rhommel Lamas
2 Ricardo Oliveira
2 Ricky Cook
2 Rob Terhaar
2 Salimane Adjao Moustapha
2 William Leese
2 Wouter Scheele
2 Zsolt Keseru
2 bcicen
2 coreone
2 krall
2 sebastian cole
1 Felix Bechstein
1 Justin Stoller
1 Kasumi Hanazuki
1 Keith Thornhill
1 Eugene Malihins
1 Elliot Huffman
1 Dylan Cochran
1 Maarten Claes
1 Aron Parsons
1 Mario Weigel
1 Dmitriy Myaskovskiy
1 Mark Kusch
1 Darragh Bailey
1 Martin Dietze
1 Martin Prebio
1 Daniel Werdermann
1 Michael Gorsuch
1 Michael Hackner
1 Michael Wells
1 Mick Pollard
1 Mickaël FORTUNATO
1 kasisnu
1 Mike Terzo
1 Nathan Flynn
1 Nathan R Valentine
1 Neil Parley
1 keith
1 Daniel Lawrence
1 Oriol Fitó
1 Daniel Klockenkämper
1 Daniel Holz
1 will vuong
1 Pierre Radermecker
1 Povilas Daukintis
1 Colin Hebert
1 Chris Wendt
1 ladoe00
1 mh
1 mujiburger
1 Andreas de Pretis
1 Alexander Dudko
1 Robin Westin Larsson
1 Chris Hoffman
1 Alex Elman
1 Adam Stephens
1 Sam Grimee (BDB)
1 Sam Weston
1 Saverio Proto
1 Chris Crewdson
1 Sean Sube
1 Tassilo Schweyer
1 Chadwick Banning
1 Bryan Belanger
1 Tim Bishop
1 Tim Hartmann
1 Tim Sharpe
1 Tom De Vylder
1 Tom Mast
1 Bruno Léon
1 Tomasz Tarczynski
1 Brandon Weeks
1 Vebjorn Ljosa
1 Brad Cowie
1 Benjamin Merot
1 Adriaan Peeters
1 Ben Ford
1 sauce@freenode
1 Adam Yohrling
1 andygodwin
1 willpayne
1 bob
1 James Abley
1 James Green
1 Jakub Husak
1 Huaqing Zheng
1 Harald Skoglund
1 Jing Dong
1 Hane, Jason
1 ssube
1 fluential
1 Joaquin Henriquez
1 Jonas Renggli
1 HIngst, Arne-Kristian
1 Grcic Ivan GEOINFO
1 Jos Houtman
1 Josef Johansson
1 Josh Brown
1 Arran Walker
1 Geoff Meakin
1 Joshua Spence
1 Justin Riley
1 Schusler, Olaf

# Version 3.5.0
Changes range for dependent modules
Use multiple networks in docker::run and docker::services
Fixes quotes with docker::services command
Publish multiple ports to docker::services
A full list of issues and PRs associated with this release can be found [here](https://github.com/puppetlabs/puppetlabs-docker/milestone/7?closed=1)
# Version 3.4.0
Introduces docker_stack type and provider
Fixes frozen string in docker swarm token task
Acceptance testing updates
Allow use of newer translate module
A full list of issues and PRs associated with this release can be found [here](https://github.com/puppetlabs/puppetlabs-docker/milestone/6?closed=1)
# Version 3.3.0
Pins apt repo to 500 to ensure packages are updated
Fixes issue in docker fact failing when docker is not started
Acceptance testing updates
Allows more recent version of the reboot module
A full list of issues and PRs associated with this release can be found [here](https://github.com/puppetlabs/puppetlabs-docker/milestone/5?closed=1)
# Version 3.2.0
Adds in support for Puppet 6
Containers will be restarted due to script changes in [PR #367](https://github.com/puppetlabs/puppetlabs-docker/pull/367)
A full list of issues and PRs associated with this release can be found [here](https://github.com/puppetlabs/puppetlabs-docker/milestone/4?closed=1)
# Version 3.1.0
Adding in the following features/functionality
- Docker Stack support on Windows.
# Version 3.0.0
Various fixes for github issues
- 206
- 226
- 241
- 280
- 281
- 287
- 289
- 294
- 303
- 312
- 314
Adding in the following features/functionality
- Support for multiple compose files.
A full list of issues and PRs associated with this release can be found [here](https://github.com/puppetlabs/puppetlabs-docker/issues?q=is%3Aissue+milestone%3AV3.0.0+is%3Aclosed)
# Version 2.0.0
Various fixes for github issues
- 193
- 197
- 198
- 203
- 207
- 208
- 209
- 211
- 212
- 213
- 215
- 216
- 217
- 218
- 223
- 224
- 225
- 228
- 229
- 230
- 232
- 234
- 237
- 243
- 245
- 255
- 256
- 259
Adding in the following features/functionality
- Ability to define swarm clusters in Hiera.
- Support docker compose file V2.3.
- Support refresh only flag.
- Support for Docker healthcheck and unhealthy container restart.
- Support for Docker on Windows:
  - Add docker ee support for windows server 2016.
  - Docker image on Windows.
  - Docker run on Windows.
  - Docker compose on Windows.
  - Docker swarm on Windows.
  - Add docker exec functionality for docker on windows.
  - Add storage driver for Windows.
A full list of issues and PRs associated with this release can be found [here](https://github.com/puppetlabs/puppetlabs-docker/milestone/2?closed=1)
# Version 1.1.0
Various fixes for Github issues
- 183
- 173
- 167
- 163
- 161
Adding in the following features/functionality
- IPv6 support
- Define type for docker plugins
A full list of issues and PRs associated with this release can be found [here](https://github.com/puppetlabs/puppetlabs-docker/milestone/1?closed=1)
# Version 1.0.5
Various fixes for Github issues
- 98
- 104
- 115
- 122
- 124
Adding in the following features/functionality
- Removed all unsupported OS related code from module
- Removed EPEL dependency
- Added http support in compose proxy
- Added in rubocop support and i18n gem support
- Type and provider for docker volumes
- Update apt module to latest
- Added in support for a registry mirror
- Facts for docker version and docker info
- Fixes for $pass_hash undef
- Fixed typo in param.pp
- Replaced deprecated stdlib functions with data types
# Version 1.0.4
Correcting changelog
# Version 1.0.3
Various fixes for Github issues
- 33
- 68
- 74
- 77
- 84
Adding in the following features/functionality:
- Add tasks to update existing service
- Backwards compatible TMPDIR
- Optional GPG check on repos
- Force pull on image tag 'latest'
- Add support for overlay2.override_kernel_check setting
- Add docker network fact
- Add pw hash for registry login idempotency
- Additional flags for creating a network
- Fixing incorrect repo url for redhat
# Version 1.0.2
Various fixes for Github issues
- 9
- 11
- 15
- 21
Add tasks support for Docker Swarm
# Version 1.0.1
Updated metadata and CHANGELOG
# Version 1.0.0
Forked from garethr/docker v5.3.0
Added support for:
- Docker services within a swarm cluster
- Swarm mode
- Docker secrets

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the
copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other
entities that control, are controlled by, or are under common control with
that entity. For the purposes of this definition, "control" means (i) the
power, direct or indirect, to cause the direction or management of such
entity, whether by contract or otherwise, or (ii) ownership of
fifty percent (50%) or more of the outstanding shares, or (iii) beneficial
ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation source,
and configuration files.
"Object" form shall mean any form resulting from mechanical transformation
or translation of a Source form, including but not limited to compiled
object code, generated documentation, and conversions to
other media types.
"Work" shall mean the work of authorship, whether in Source or Object
form, made available under the License, as indicated by a copyright notice
that is included in or attached to the work (an example is provided in the
Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form,
that is based on (or derived from) the Work and for which the editorial
revisions, annotations, elaborations, or other modifications represent,
as a whole, an original work of authorship. For the purposes of this
License, Derivative Works shall not include works that remain separable
from, or merely link (or bind by name) to the interfaces of, the Work and
Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original
version of the Work and any modifications or additions to that Work or
Derivative Works thereof, that is intentionally submitted to Licensor for
inclusion in the Work by the copyright owner or by an individual or
Legal Entity authorized to submit on behalf of the copyright owner.
For the purposes of this definition, "submitted" means any form of
electronic, verbal, or written communication sent to the Licensor or its
representatives, including but not limited to communication on electronic
mailing lists, source code control systems, and issue tracking systems
that are managed by, or on behalf of, the Licensor for the purpose of
discussing and improving the Work, but excluding communication that is
conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on
behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor
hereby grants to You a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable copyright license to reproduce, prepare
Derivative Works of, publicly display, publicly perform, sublicense,
and distribute the Work and such Derivative Works in
Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor
hereby grants to You a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable (except as stated in this section) patent
license to make, have made, use, offer to sell, sell, import, and
otherwise transfer the Work, where such license applies only to those
patent claims licensable by such Contributor that are necessarily
infringed by their Contribution(s) alone or by combination of their
Contribution(s) with the Work to which such Contribution(s) was submitted.
If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or
contributory patent infringement, then any patent licenses granted to
You under this License for that Work shall terminate as of the date such
litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works
thereof in any medium, with or without modifications, and in Source or
Object form, provided that You meet the following conditions:
1. You must give any other recipients of the Work or Derivative Works a
copy of this License; and
2. You must cause any modified files to carry prominent notices stating
that You changed the files; and
3. You must retain, in the Source form of any Derivative Works that You
distribute, all copyright, patent, trademark, and attribution notices from
the Source form of the Work, excluding those notices that do not pertain
to any part of the Derivative Works; and
4. If the Work includes a "NOTICE" text file as part of its distribution,
then any Derivative Works that You distribute must include a readable copy
of the attribution notices contained within such NOTICE file, excluding
those notices that do not pertain to any part of the Derivative Works,
in at least one of the following places: within a NOTICE text file
distributed as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or, within a
display generated by the Derivative Works, if and wherever such
third-party notices normally appear. The contents of the NOTICE file are
for informational purposes only and do not modify the License.
You may add Your own attribution notices within Derivative Works that You
distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and may
provide additional or different license terms and conditions for use,
reproduction, or distribution of Your modifications, or for any such
Derivative Works as a whole, provided Your use, reproduction, and
distribution of the Work otherwise complies with the conditions
stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally
submitted for inclusion in the Work by You to the Licensor shall be under
the terms and conditions of this License, without any additional
terms or conditions. Notwithstanding the above, nothing herein shall
supersede or modify the terms of any separate license agreement you may
have executed with Licensor regarding such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor
provides the Work (and each Contributor provides its Contributions)
on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
either express or implied, including, without limitation, any warranties
or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS
FOR A PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any risks
associated with Your exercise of permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort
(including negligence), contract, or otherwise, unless required by
applicable law (such as deliberate and grossly negligent acts) or agreed
to in writing, shall any Contributor be liable to You for damages,
including any direct, indirect, special, incidental, or consequential
damages of any character arising as a result of this License or out of
the use or inability to use the Work (including but not limited to damages
for loss of goodwill, work stoppage, computer failure or malfunction,
or any and all other commercial damages or losses), even if such
Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose
to offer, and charge a fee for, acceptance of support, warranty,
indemnity, or other liability obligations and/or rights consistent with
this License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf of any
other Contributor, and only if You agree to indemnify, defend, and hold
each Contributor harmless for any liability incurred by, or claims
asserted against, such Contributor by reason of your accepting any such
warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included
on the same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2013 Gareth Rushgrove
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the License for the specific language governing
permissions and limitations under the License.


--- {}

# == Function: docker::sanitised_name
#
# Function to sanitise container name.
#
# === Parameters
#
# [*name*]
# Name to sanitise
#
function docker::sanitised_name($name) {
regsubst($name, '[^0-9A-Za-z.\-_]', '-', 'G')
}
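The `regsubst` call above can be sketched in plain Ruby for quick experimentation (a stand-alone illustration of the same substitution, not part of the module):

```ruby
# Equivalent of docker::sanitised_name: every character outside
# 0-9, A-Z, a-z, '.', '-' and '_' is replaced with a hyphen,
# matching Docker's allowed container-name characters.
def sanitised_name(name)
  name.gsub(/[^0-9A-Za-z.\-_]/, '-')
end

puts sanitised_name('my app/v1.2') # prints my-app-v1.2
```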

---
version: 5
defaults: # Used for any hierarchy level that omits these keys.
datadir: data # This path is relative to hiera.yaml's directory.
data_hash: yaml_data # Use the built-in YAML backend.
hierarchy:
- name: "osfamily/major release"
paths:
# Used to distinguish between Debian and Ubuntu
- "os/%{facts.os.name}/%{facts.os.release.major}.yaml"
- "os/%{facts.os.family}/%{facts.os.release.major}.yaml"
# Used for Solaris
- "os/%{facts.os.family}/%{facts.kernelrelease}.yaml"
- name: "osfamily"
paths:
- "os/%{facts.os.name}.yaml"
- "os/%{facts.os.family}.yaml"
- name: 'common'
path: 'common.yaml'

# frozen_string_literal: true
require 'facter'
require 'json'
Facter.add(:docker_systemroot) do
confine osfamily: :windows
setcode do
Puppet::Util.get_env('SystemRoot')
end
end
Facter.add(:docker_program_files_path) do
confine osfamily: :windows
setcode do
Puppet::Util.get_env('ProgramFiles')
end
end
Facter.add(:docker_program_data_path) do
confine osfamily: :windows
setcode do
Puppet::Util.get_env('ProgramData')
end
end
Facter.add(:docker_user_temp_path) do
confine osfamily: :windows
setcode do
Puppet::Util.get_env('TEMP')
end
end
docker_command = if Facter.value(:kernel) == 'windows'
'powershell -NoProfile -NonInteractive -NoLogo -ExecutionPolicy Bypass -c docker'
else
'docker'
end
def interfaces
Facter.value(:interfaces).split(',')
end
Facter.add(:docker_version) do
confine { Facter::Core::Execution.which('docker') }
setcode do
value = Facter::Core::Execution.execute(
"#{docker_command} version --format '{{json .}}'", timeout: 90
)
JSON.parse(value)
end
end
Facter.add(:docker_client_version) do
setcode do
docker_version = Facter.value(:docker_version)
if docker_version
if docker_version['Client'].nil?
docker_version['Version']
else
docker_version['Client']['Version']
end
end
end
end
Facter.add(:docker_server_version) do
setcode do
docker_version = Facter.value(:docker_version)
if docker_version && !docker_version['Server'].nil? && docker_version['Server'].is_a?(Hash)
docker_version['Server']['Version']
else
nil
end
end
end
Facter.add(:docker_worker_join_token) do
confine { Facter::Core::Execution.which('docker') }
setcode do
# only run `docker swarm` commands if this node is active in a cluster
docker_json_str = Facter::Core::Execution.execute(
"#{docker_command} info --format '{{json .}}'", timeout: 90
)
begin
docker = JSON.parse(docker_json_str)
if docker.fetch('Swarm', {})['LocalNodeState'] == 'active'
val = Facter::Core::Execution.execute(
"#{docker_command} swarm join-token worker -q", timeout: 90
)
end
rescue JSON::ParserError
nil
end
val
end
end
Facter.add(:docker_manager_join_token) do
confine { Facter::Core::Execution.which('docker') }
setcode do
# only run `docker swarm` commands if this node is active in a cluster
docker_json_str = Facter::Core::Execution.execute(
"#{docker_command} info --format '{{json .}}'", timeout: 90
)
begin
docker = JSON.parse(docker_json_str)
if docker.fetch('Swarm', {})['LocalNodeState'] == 'active'
val = Facter::Core::Execution.execute(
"#{docker_command} swarm join-token manager -q", timeout: 90
)
end
rescue JSON::ParserError
nil
end
val
end
end
Facter.add(:docker) do
confine { Facter::Core::Execution.which('docker') }
setcode do
docker_version = Facter.value(:docker_client_version)
if docker_version&.match?(%r{\A(1\.1[3-9]|[2-9]|\d{2,})\.})
docker_json_str = Facter::Core::Execution.execute(
"#{docker_command} info --format '{{json .}}'", timeout: 90
)
begin
docker = JSON.parse(docker_json_str)
docker['network'] = {}
docker['network']['managed_interfaces'] = {}
network_list = Facter::Core::Execution.execute("#{docker_command} network ls | tail -n +2", timeout: 90)
docker_network_names = []
network_list.each_line { |line| docker_network_names.push line.split[1] }
docker_network_ids = []
network_list.each_line { |line| docker_network_ids.push line.split[0] }
docker_network_names.each do |network|
inspect = JSON.parse(Facter::Core::Execution.execute("#{docker_command} network inspect #{network}", timeout: 90))
docker['network'][network] = inspect[0]
network_id = docker['network'][network]['Id'][0..11]
interfaces.each do |iface|
docker['network']['managed_interfaces'][iface] = network if %r{#{network_id}}.match?(iface)
end
end
docker
rescue JSON::ParserError
nil
end
end
end
end
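The client-version logic in the facts above can be exercised in isolation with a canned payload (a minimal sketch; the sample JSON is invented for illustration, not captured from a real daemon):

```ruby
require 'json'

# Mirrors the docker_client_version fact: prefer Client.Version,
# falling back to the top-level Version key emitted by older releases.
def client_version(docker_version)
  return nil unless docker_version
  if docker_version['Client'].nil?
    docker_version['Version']
  else
    docker_version['Client']['Version']
  end
end

sample = JSON.parse('{"Client":{"Version":"24.0.7"},"Server":{"Version":"24.0.7"}}')
puts client_version(sample) # prints 24.0.7
```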

# frozen_string_literal: true
Puppet::Functions.create_function(:'docker::env') do
dispatch :env do
param 'Array', :args
return_type 'Array'
end
def env(args)
args
end
end

# frozen_string_literal: true
Puppet::Functions.create_function(:docker_params_changed) do
dispatch :detect_changes do
param 'Hash', :opts
return_type 'String'
end
def run_with_powershell(cmd)
"powershell.exe -Command \"& {#{cmd}}\" "
end
def remove_cidfile(cidfile, osfamily)
delete_command = if osfamily == 'windows'
run_with_powershell("del #{cidfile}")
else
"rm -f #{cidfile}"
end
_stdout, _stderr, _status = Open3.capture3(delete_command)
end
def start_container(name, osfamily)
start_command = if osfamily == 'windows'
run_with_powershell("docker start #{name}")
else
"docker start #{name}"
end
_stdout, _stderr, _status = Open3.capture3(start_command)
end
def stop_container(name, osfamily)
stop_command = if osfamily == 'windows'
run_with_powershell("docker stop #{name}")
else
"docker stop #{name}"
end
_stdout, _stderr, _status = Open3.capture3(stop_command)
end
def remove_container(name, osfamily, stop_wait_time, cidfile)
stop_command = if osfamily == 'windows'
run_with_powershell("docker stop --time=#{stop_wait_time} #{name}")
else
"docker stop --time=#{stop_wait_time} #{name}"
end
_stdout, _stderr, _status = Open3.capture3(stop_command)
remove_command = if osfamily == 'windows'
run_with_powershell("docker rm -v #{name}")
else
"docker rm -v #{name}"
end
_stdout, _stderr, _status = Open3.capture3(remove_command)
remove_cidfile(cidfile, osfamily)
end
def create_container(cmd, osfamily, image)
pull_command = if osfamily == 'windows'
run_with_powershell("docker pull #{image} -q")
else
"docker pull #{image} -q"
end
_stdout, _stderr, _status = Open3.capture3(pull_command)
create_command = if osfamily == 'windows'
run_with_powershell(cmd)
else
cmd
end
_stdout, _stderr, _status = Open3.capture3(create_command)
end
def detect_changes(opts)
require 'open3'
require 'json'
return_value = 'No changes detected'
if opts['sanitised_title'] && opts['osfamily']
stdout, _stderr, status = Open3.capture3("docker inspect #{opts['sanitised_title']}")
if status.to_s.include?('exit 0')
param_changed = false
inspect_hash = JSON.parse(stdout)[0]
# check if the image was changed
param_changed = true if opts['image'] && opts['image'] != inspect_hash['Config']['Image']
# check if something on volumes or mounts was changed(a new volume/mount was added or removed)
param_changed = true if opts['volumes'].is_a?(String) && opts['volumes'].include?(':') && opts['volumes'] != inspect_hash['Mounts'].to_a[0] && opts['osfamily'] != 'windows'
param_changed = true if opts['volumes'].is_a?(String) && !opts['volumes'].include?(':') && opts['volumes'] != inspect_hash['Config']['Volumes'].to_a[0] && opts['osfamily'] != 'windows'
param_changed = true if opts['volumes'].is_a?(String) && opts['volumes'].scan(%r{(?=:)}).count == 2 && opts['volumes'] != inspect_hash['Mounts'].to_a[0] && opts['osfamily'] == 'windows'
if opts['volumes'].is_a?(String) && opts['volumes'].scan(%r{(?=:)}).count == 1 && opts['volumes'] != inspect_hash['Config']['Volumes'].to_a[0] && opts['osfamily'] == 'windows'
param_changed = true
end
pp_paths = opts['volumes'].reject { |item| item.include?(':') } if opts['volumes'].is_a?(Array) && opts['osfamily'] != 'windows'
pp_mounts = opts['volumes'].select { |item| item.include?(':') } if opts['volumes'].is_a?(Array) && opts['osfamily'] != 'windows'
pp_paths = opts['volumes'].select { |item| item.scan(%r{(?=:)}).count == 1 } if opts['volumes'].is_a?(Array) && opts['osfamily'] == 'windows'
pp_mounts = opts['volumes'].select { |item| item.scan(%r{(?=:)}).count == 2 } if opts['volumes'].is_a?(Array) && opts['osfamily'] == 'windows'
inspect_paths = if inspect_hash['Config']['Volumes']
inspect_hash['Config']['Volumes'].keys
else
[]
end
param_changed = true if pp_paths != inspect_paths
names = inspect_hash['Mounts'].map { |item| item.values[1] } if inspect_hash['Mounts']
pp_names = pp_mounts.map { |item| item.split(':')[0] } if pp_mounts
names = names.select { |item| pp_names.include?(item) } if names && pp_names
destinations = inspect_hash['Mounts'].map { |item| item.values[3] } if inspect_hash['Mounts']
pp_destinations = pp_mounts.map { |item| item.split(':')[1] } if pp_mounts && opts['osfamily'] != 'windows'
pp_destinations = pp_mounts.map { |item| "#{item.split(':')[1].downcase}:#{item.split(':')[2]}" } if pp_mounts && opts['osfamily'] == 'windows'
destinations = destinations.select { |item| pp_destinations.include?(item) } if destinations && pp_destinations
param_changed = true if pp_names != names
param_changed = true if pp_destinations != destinations
param_changed = true if pp_mounts != [] && inspect_hash['Mounts'].nil?
# check if something on ports was changed (some ports were added or removed)
ports = inspect_hash['HostConfig']['PortBindings'].keys
ports = ports.map { |item| item.split('/')[0] }
pp_ports = opts['ports'].sort if opts['ports'].is_a?(Array)
pp_ports = [opts['ports']] if opts['ports'].is_a?(String)
param_changed = true if pp_ports && pp_ports != ports
if param_changed
remove_container(opts['sanitised_title'], opts['osfamily'], opts['stop_wait_time'], opts['cidfile'])
create_container(opts['command'], opts['osfamily'], opts['image'])
return_value = 'Param changed'
end
else
create_container(opts['command'], opts['osfamily'], opts['image']) unless File.exist?(opts['cidfile'])
_stdout, _stderr, status = Open3.capture3("docker inspect #{opts['sanitised_title']}")
unless status.to_s.include?('exit 0')
remove_cidfile(opts['cidfile'], opts['osfamily'])
create_container(opts['command'], opts['osfamily'], opts['image'])
end
return_value = 'No changes detected'
end
else
return_value = 'Arg required missing'
end
if opts['container_running']
start_container(opts['sanitised_title'], opts['osfamily'])
else
stop_container(opts['sanitised_title'], opts['osfamily'])
end
return_value
end
end

@@ -0,0 +1,25 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_exec_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker exec flags
newfunction(:docker_exec_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << '--detach=true' if opts['detach']
flags << '--interactive=true' if opts['interactive']
flags << '--tty=true' if opts['tty']
opts['env']&.each do |namevaluepair|
flags << "--env #{namevaluepair}"
end
flags.flatten.join(' ')
end
end
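As a quick illustration of the mapping above, here is a standalone sketch in plain Ruby (detached from the Puppet function API; the input hash is hypothetical) showing what the function returns for a typical options hash:

```ruby
# Standalone sketch of the docker_exec_flags mapping: each truthy
# option becomes one flag, env entries expand to repeated --env flags.
def docker_exec_flags(opts = {})
  flags = []
  flags << '--detach=true' if opts['detach']
  flags << '--interactive=true' if opts['interactive']
  flags << '--tty=true' if opts['tty']
  (opts['env'] || []).each do |namevaluepair|
    flags << "--env #{namevaluepair}"
  end
  flags.join(' ')
end

puts docker_exec_flags('interactive' => true, 'tty' => true, 'env' => ['FOO=bar'])
# => --interactive=true --tty=true --env FOO=bar
```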

@@ -0,0 +1,20 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_plugin_enable_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker plugin enable flags
newfunction(:docker_plugin_enable_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
if opts['plugin_alias'] && opts['plugin_alias'].to_s != 'undef'
flags << "'#{opts['plugin_alias']}'"
elsif opts['plugin_name'] && opts['plugin_name'].to_s != 'undef'
flags << "'#{opts['plugin_name']}'"
end
flags.flatten.join(' ')
end
end

@@ -0,0 +1,24 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_plugin_install_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker plugin install flags
newfunction(:docker_plugin_install_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << "--alias #{opts['plugin_alias']}" if opts['plugin_alias'] && opts['plugin_alias'].to_s != 'undef'
flags << '--disable' if opts['disable_on_install'] == true
flags << '--disable-content-trust' if opts['disable_content_trust'] == true
flags << '--grant-all-permissions' if opts['grant_all_permissions'] == true
flags << "'#{opts['plugin_name']}'" if opts['plugin_name'] && opts['plugin_name'].to_s != 'undef'
if opts['settings'].is_a? Array
opts['settings'].each do |setting|
flags << setting.to_s
end
end
flags.flatten.join(' ')
end
end

@@ -0,0 +1,16 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_plugin_remove_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker plugin remove flags
newfunction(:docker_plugin_remove_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << '--force' if opts['force_remove'] == true
flags << "'#{opts['plugin_name']}'" if opts['plugin_name'] && opts['plugin_name'].to_s != 'undef'
flags.flatten.join(' ')
end
end

@@ -0,0 +1,96 @@
# frozen_string_literal: true
#
# docker_run_flags.rb
#
module Puppet::Parser::Functions
newfunction(:'docker::escape', type: :rvalue) do |args|
subject = args[0]
escape_function = if self['facts'] && self['facts']['os']['family'] == 'windows'
'stdlib::powershell_escape'
else
'stdlib::shell_escape'
end
call_function(escape_function, subject)
end
# Transforms a hash into a string of docker flags
newfunction(:docker_run_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << "-u #{call_function('docker::escape', [opts['username']])}" if opts['username']
flags << "-h #{call_function('docker::escape', [opts['hostname']])}" if opts['hostname']
flags << "--restart '#{opts['restart']}'" if opts['restart']
if opts['net']
if opts['net'].is_a? String
flags << "--net #{call_function('docker::escape', [opts['net']])}"
elsif opts['net'].is_a? Array
flags += opts['net'].map { |item| "--net #{call_function('docker::escape', [item])}" }
end
end
flags << "-m #{opts['memory_limit']}" if opts['memory_limit']
cpusets = [opts['cpuset']].flatten.compact
unless cpusets.empty?
value = cpusets.join(',')
flags << "--cpuset-cpus=#{value}"
end
flags << '-n false' if opts['disable_network']
flags << '--privileged' if opts['privileged']
flags << "--health-cmd='#{opts['health_check_cmd']}'" if opts['health_check_cmd'] && opts['health_check_cmd'].to_s != 'undef'
flags << "--health-interval=#{opts['health_check_interval']}s" if opts['health_check_interval'] && opts['health_check_interval'].to_s != 'undef'
flags << '-t' if opts['tty']
flags << '--read-only=true' if opts['read_only']
params_join_char = if opts['osfamily'] && opts['osfamily'].to_s != 'undef'
opts['osfamily'].casecmp('windows').zero? ? " `\n" : " \\\n"
else
" \\\n"
end
multi_flags = ->(values, fmt) {
filtered = [values].flatten.compact
filtered.map { |val| (fmt + params_join_char) % call_function('docker::escape', [val]) }
}
[
['--dns %s', 'dns'],
['--dns-search %s', 'dns_search'],
['--expose=%s', 'expose'],
['--link %s', 'links'],
['--lxc-conf=%s', 'lxc_conf'],
['--volumes-from %s', 'volumes_from'],
['-e %s', 'env'],
['--env-file %s', 'env_file'],
['-p %s', 'ports'],
['-l %s', 'labels'],
['--add-host %s', 'hostentries'],
['-v %s', 'volumes'],
].each do |(format, key)|
values = opts[key]
new_flags = multi_flags.call(values, format)
flags.concat(new_flags)
end
opts['extra_params'].each do |param|
flags << param
end
# Some software (inc systemd) will truncate very long lines using glibc's
# max line length. Wrap options across multiple lines with '\' to avoid
flags.flatten.join(params_join_char)
end
end
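The `multi_flags` lambda above is the core expansion step: each `(format, key)` pair turns zero or more values into flags, each carrying the platform's line-continuation suffix. A minimal sketch with the `docker::escape` call stubbed out (the values below are made up):

```ruby
# Sketch of the multi_flags expansion from docker_run_flags, with
# escaping omitted. Each produced flag ends in the POSIX
# line-continuation sequence (" `\n" would be used on Windows).
params_join_char = " \\\n"

multi_flags = ->(values, fmt) {
  # nil (option unset) flattens/compacts away to an empty list
  [values].flatten.compact.map { |val| (fmt + params_join_char) % val }
}

flags = []
[
  ['--dns %s', ['8.8.8.8', '8.8.4.4']],
  ['-p %s', '8080:80'],
  ['-e %s', nil],
].each do |(format, values)|
  flags.concat(multi_flags.call(values, format))
end

puts flags.join
```

Note that every flag already ends with the continuation sequence, which keeps long `docker run` command lines below glibc's line-length truncation when systemd renders them.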

@@ -0,0 +1,33 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_secrets_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker secret flags
newfunction(:docker_secrets_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << 'create' if opts['ensure'].to_s == 'present'
flags << "'#{opts['secret_name']}'" if opts['secret_name'] && opts['secret_name'].to_s != 'undef'
flags << "'#{opts['secret_path']}'" if opts['secret_path'] && opts['secret_path'].to_s != 'undef'
multi_flags = ->(values, format) {
filtered = [values].flatten.compact
filtered.map { |val| (format + " \\\n") % val }
}
[
['-l %s', 'label'],
].each do |(format, key)|
values = opts[key]
new_flags = multi_flags.call(values, format)
flags.concat(new_flags)
end
flags.flatten.join(' ')
end
end

@@ -0,0 +1,83 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_service_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker service flags
newfunction(:docker_service_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << "'#{opts['service_name']}'" if opts['service_name'] && opts['service_name'].to_s != 'undef'
flags << '--detach' if opts['detach'].to_s != 'false'
if opts['env'].is_a? Array
opts['env'].each do |env|
flags << "--env '#{env}'"
end
end
if opts['label'].is_a? Array
opts['label'].each do |label|
flags << "--label #{label}"
end
end
if opts['mounts'].is_a? Array
opts['mounts'].each do |mount|
flags << "--mount #{mount}"
end
end
if opts['networks'].is_a? Array
opts['networks'].each do |network|
flags << "--network #{network}"
end
end
if opts['publish'].is_a? Array
opts['publish'].each do |port|
flags << "--publish #{port}"
end
elsif opts['publish'] && opts['publish'].to_s != 'undef'
flags << "--publish '#{opts['publish']}'"
end
flags << "--replicas '#{opts['replicas']}'" if opts['replicas'] && opts['replicas'].to_s != 'undef'
flags << '--tty' if opts['tty'].to_s != 'false'
flags << "--user '#{opts['user']}'" if opts['user'] && opts['user'].to_s != 'undef'
flags << "--workdir '#{opts['workdir']}'" if opts['workdir'] && opts['workdir'].to_s != 'undef'
if opts['extra_params'].is_a? Array
opts['extra_params'].each do |param|
flags << param
end
end
flags << "-H '#{opts['host_socket']}'" if opts['host_socket'] && opts['host_socket'].to_s != 'undef'
if opts['registry_mirror'].is_a? Array
opts['registry_mirror'].each do |param|
flags << "--registry-mirror='#{param}'"
end
elsif opts['registry_mirror'] && opts['registry_mirror'].to_s != 'undef'
flags << "--registry-mirror='#{opts['registry_mirror']}'"
end
flags << "'#{opts['image']}'" if opts['image'] && opts['image'].to_s != 'undef'
if opts['command'].is_a? Array
flags << opts['command'].join(' ')
elsif opts['command'] && opts['command'].to_s != 'undef'
flags << opts['command'].to_s
end
flags.flatten.join(' ')
end
end

@@ -0,0 +1,29 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_stack_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker stack flags
newfunction(:docker_stack_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << "--bundle-file '#{opts['bundle_file']}'" if opts['bundle_file'] && opts['bundle_file'].to_s != 'undef'
if opts['compose_files'] && opts['compose_files'].to_s != 'undef'
opts['compose_files'].each do |file|
flags << "--compose-file '#{file}'"
end
end
flags << "--resolve-image '#{opts['resolve_image']}'" if opts['resolve_image'] && opts['resolve_image'].to_s != 'undef'
flags << '--prune' if opts['prune'] && opts['prune'].to_s != 'undef'
flags << '--with-registry-auth' if opts['with_registry_auth'] && opts['with_registry_auth'].to_s != 'undef'
flags.flatten.join(' ')
end
end

@@ -0,0 +1,43 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_swarm_init_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker swarm init flags
newfunction(:docker_swarm_init_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << 'init' if opts['init'].to_s != 'false'
flags << "--advertise-addr '#{opts['advertise_addr']}'" if opts['advertise_addr'] && opts['advertise_addr'].to_s != 'undef'
flags << '--autolock' if opts['autolock'].to_s != 'false'
flags << "--cert-expiry '#{opts['cert_expiry']}'" if opts['cert_expiry'] && opts['cert_expiry'].to_s != 'undef'
if opts['default_addr_pool'].is_a? Array
opts['default_addr_pool'].each do |default_addr_pool|
flags << "--default-addr-pool #{default_addr_pool}"
end
end
flags << "--default-addr-pool-mask-length '#{opts['default_addr_pool_mask_length']}'" if opts['default_addr_pool_mask_length'] && opts['default_addr_pool_mask_length'].to_s != 'undef'
flags << "--dispatcher-heartbeat '#{opts['dispatcher_heartbeat']}'" if opts['dispatcher_heartbeat'] && opts['dispatcher_heartbeat'].to_s != 'undef'
flags << "--external-ca '#{opts['external_ca']}'" if opts['external_ca'] && opts['external_ca'].to_s != 'undef'
flags << '--force-new-cluster' if opts['force_new_cluster'].to_s != 'false'
flags << "--listen-addr '#{opts['listen_addr']}'" if opts['listen_addr'] && opts['listen_addr'].to_s != 'undef'
flags << "--max-snapshots '#{opts['max_snapshots']}'" if opts['max_snapshots'] && opts['max_snapshots'].to_s != 'undef'
flags << "--snapshot-interval '#{opts['snapshot_interval']}'" if opts['snapshot_interval'] && opts['snapshot_interval'].to_s != 'undef'
flags.flatten.join(' ')
end
end

@@ -0,0 +1,23 @@
# frozen_string_literal: true
require 'shellwords'
#
# docker_swarm_join_flags.rb
#
module Puppet::Parser::Functions
# Transforms a hash into a string of docker swarm join flags
newfunction(:docker_swarm_join_flags, type: :rvalue) do |args|
opts = args[0] || {}
flags = []
flags << 'join' if opts['join'].to_s != 'false'
flags << "--advertise-addr '#{opts['advertise_addr']}'" if opts['advertise_addr'] && opts['advertise_addr'].to_s != 'undef'
flags << "--listen-addr \"#{opts['listen_addr']}\"" if opts['listen_addr'] && opts['listen_addr'].to_s != 'undef'
flags << "--token '#{opts['token']}'" if opts['token'] && opts['token'].to_s != 'undef'
flags.flatten.join(' ')
end
end

@@ -0,0 +1,109 @@
# frozen_string_literal: true
require 'deep_merge'
Puppet::Type.type(:docker_compose).provide(:ruby) do
desc 'Support for Puppet running Docker Compose'
mk_resource_methods
has_command(:docker, 'docker')
def set_tmpdir
return unless resource[:tmpdir]
# Check if the tmpdir target exists
Puppet.warning("#{resource[:tmpdir]} (defined as docker_compose tmpdir) does not exist") unless Dir.exist?(resource[:tmpdir])
# Set TMPDIR environment variable only if defined among resources and exists
ENV['TMPDIR'] = resource[:tmpdir] if Dir.exist?(resource[:tmpdir])
end
def exists?
Puppet.info("Checking for compose project #{name}")
compose_services = {}
compose_containers = []
set_tmpdir
# get merged config using docker-compose config
args = ['compose', compose_files, '-p', name, 'config'].insert(3, resource[:options]).compact
compose_output = Puppet::Util::Yaml.safe_load(execute([command(:docker)] + args, combine: false), [Symbol])
containers = docker([
'ps',
'--format',
"'{{.Label \"com.docker.compose.service\"}}-{{.Image}}'",
'--filter',
"label=com.docker.compose.project=#{name}",
]).split("\n")
compose_containers.push(*containers)
compose_services = compose_output['services']
return false if compose_services.count != compose_containers.uniq.count
counts = Hash[*compose_services.each.map { |key, array|
image = array['image'] || get_image(key, compose_services)
Puppet.info("Checking for compose service #{key} #{image}")
[key, compose_containers.count("'#{key}-#{image}'")]
}.flatten]
# No containers found for the project
if counts.empty? ||
# Containers described in the compose file are not running
counts.any? { |_k, v| v.zero? } ||
# The scaling factors in the resource do not match the number of running containers
(resource[:scale] && counts.merge(resource[:scale]) != counts)
false
else
true
end
end
def get_image(service_name, compose_services)
image = compose_services[service_name]['image']
unless image
if compose_services[service_name]['extends']
image = get_image(compose_services[service_name]['extends'], compose_services)
elsif compose_services[service_name]['build']
image = "#{name}_#{service_name}"
end
end
image
end
def create
Puppet.info("Running compose project #{name}")
args = ['compose', compose_files, '-p', name, 'up', '-d', '--remove-orphans'].insert(3, resource[:options]).insert(5, resource[:up_args]).compact
docker(args)
return unless resource[:scale]
instructions = resource[:scale].map { |k, v| "#{k}=#{v}" }
Puppet.info("Scaling compose project #{name}: #{instructions.join(' ')}")
args = ['compose', compose_files, '-p', name, 'scale'].insert(3, resource[:options]).compact + instructions
docker(args)
end
def destroy
Puppet.info("Removing all containers for compose project #{name}")
kill_args = ['compose', compose_files, '-p', name, 'kill'].insert(3, resource[:options]).compact
docker(kill_args)
rm_args = ['compose', compose_files, '-p', name, 'rm', '--force', '-v'].insert(3, resource[:options]).compact
docker(rm_args)
end
def restart
return unless exists?
Puppet.info("Rebuilding and Restarting all containers for compose project #{name}")
kill_args = ['compose', compose_files, '-p', name, 'kill'].insert(3, resource[:options]).compact
docker(kill_args)
build_args = ['compose', compose_files, '-p', name, 'build'].insert(3, resource[:options]).compact
docker(build_args)
create
end
def compose_files
resource[:compose_files].map { |x| ['-f', x] }.flatten
end
end
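One non-obvious step in `exists?` above is the scale comparison: `counts.merge(resource[:scale]) != counts` is a no-op merge exactly when every requested service count is already met. A small sketch of that idiom (service names are hypothetical):

```ruby
# Sketch of the scale check: merging the desired scale hash into the
# observed per-service container counts changes nothing only when all
# desired counts already match.
def scale_mismatch?(counts, scale)
  !scale.nil? && counts.merge(scale) != counts
end

counts = { 'web' => 2, 'db' => 1 }
puts scale_mismatch?(counts, 'web' => 2) # => false (already satisfied)
puts scale_mismatch?(counts, 'web' => 3) # => true  (needs scaling)
puts scale_mismatch?(counts, nil)        # => false (no scale requested)
```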

@@ -0,0 +1,95 @@
# frozen_string_literal: true
require 'json'
Puppet::Type.type(:docker_network).provide(:ruby) do
desc 'Support for Docker Networking'
mk_resource_methods
has_command(:docker, 'docker')
def network_conf
flags = ['network', 'create']
multi_flags = ->(values, format) {
filtered = [values].flatten.compact
filtered.map { |val| format % val }
}
[
['--driver=%s', :driver],
['--subnet=%s', :subnet],
['--gateway=%s', :gateway],
['--ip-range=%s', :ip_range],
['--ipam-driver=%s', :ipam_driver],
['--aux-address=%s', :aux_address],
['--opt=%s', :options],
].each do |(format, key)|
values = resource[key]
new_flags = multi_flags.call(values, format)
flags.concat(new_flags)
end
if resource[:additional_flags]
additional_flags = []
if resource[:additional_flags].is_a?(String)
additional_flags = resource[:additional_flags].split
elsif resource[:additional_flags].is_a?(Array)
additional_flags = resource[:additional_flags]
end
additional_flags.each do |additional_flag|
flags << additional_flag
end
end
flags << resource[:name]
end
def self.instances
output = docker(['network', 'ls'])
lines = output.split("\n")
lines.shift # remove header row
lines.map do |line|
_, name, driver = line.split
inspect = docker(['network', 'inspect', name])
obj = JSON.parse(inspect).first
ipam_driver = (obj['IPAM']['Driver'] unless obj['IPAM']['Driver'].nil?)
subnet = (obj['IPAM']['Config'].first['Subnet'] if !(obj['IPAM']['Config'].nil? || obj['IPAM']['Config'].empty?) && (obj['IPAM']['Config'].first.key? 'Subnet'))
new(
name: name,
id: obj['Id'],
ipam_driver: ipam_driver,
subnet: subnet,
ensure: :present,
driver: driver,
)
end
end
def self.prefetch(resources)
instances.each do |prov|
if resource = resources[prov.name] # rubocop:disable Lint/AssignmentInCondition
resource.provider = prov
end
end
end
def flush
raise Puppet::Error, _('Docker network does not support mutating existing networks') if !@property_hash.empty? && @property_hash[:ensure] != :absent
end
def exists?
Puppet.info("Checking if docker network #{name} exists")
@property_hash[:ensure] == :present
end
def create
Puppet.info("Creating docker network #{name}")
docker(network_conf)
end
def destroy
Puppet.info("Removing docker network #{name}")
docker(['network', 'rm', name])
end
end
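`self.instances` above scrapes the tabular `docker network ls` output rather than a JSON format string. A sketch of that parsing step against captured sample output (the table below is fabricated):

```ruby
# Sketch of the table parsing in self.instances: drop the header row,
# then split each row on whitespace into id, name and driver columns.
output = <<~TABLE
  NETWORK ID     NAME      DRIVER    SCOPE
  9f5ba4a1a1b2   bridge    bridge    local
  1c7d9e2f3a4b   backend   overlay   swarm
TABLE

lines = output.split("\n")
lines.shift # remove header row
networks = lines.map do |line|
  _id, name, driver = line.split
  { name: name, driver: driver }
end

networks.each { |n| puts "#{n[:name]} (#{n[:driver]})" }
# prints:
#   bridge (bridge)
#   backend (overlay)
```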

@@ -0,0 +1,88 @@
# frozen_string_literal: true
require 'deep_merge'
Puppet::Type.type(:docker_stack).provide(:ruby) do
desc 'Support for Puppet running Docker Stacks'
mk_resource_methods
has_command(:docker, 'docker')
def exists?
Puppet.info("Checking for stack #{name}")
stack_services = {}
stack_containers = []
resource[:compose_files].each do |file|
compose_file = YAML.safe_load(File.read(file), [], [], true)
# rubocop:disable Style/StringLiterals
containers = docker([
'ps',
'--format',
"{{.Label \"com.docker.swarm.service.name\"}}-{{.Image}}",
'--filter',
"label=com.docker.stack.namespace=#{name}",
]).split("\n").each do |c|
c.slice!("#{name}_")
end
stack_containers.push(*containers)
stack_containers.uniq!
# rubocop:enable Style/StringLiterals
case compose_file['version']
when %r{^3(\.[0-7])?$}
stack_services.merge!(compose_file['services'])
else
raise(Puppet::Error, "Unsupported docker compose file syntax version \"#{compose_file['version']}\"!")
end
end
return false if stack_services.count != stack_containers.count
counts = Hash[*stack_services.each.map { |key, array|
image = array['image'] || get_image(key, stack_services)
image = "#{image}:latest" unless image.include?(':')
Puppet.info("Checking for compose service #{key} #{image}")
["#{key}-#{image}", stack_containers.count("#{key}-#{image}")]
}.flatten]
# No containers found for the project
if counts.empty? ||
# Containers described in the compose file are not running
counts.any? { |_k, v| v.zero? }
false
else
true
end
end
def get_image(service_name, stack_services)
image = stack_services[service_name]['image']
unless image
if stack_services[service_name]['extends']
image = get_image(stack_services[service_name]['extends'], stack_services)
elsif stack_services[service_name]['build']
image = "#{name}_#{service_name}"
end
end
image
end
def create
Puppet.info("Running stack #{name}")
args = ['stack', 'deploy', compose_files, name].insert(1, bundle_file).insert(4, resource[:up_args]).compact
docker(args)
end
def destroy
Puppet.info("Removing docker stack #{name}")
rm_args = ['stack', 'rm', name]
docker(rm_args)
end
def bundle_file
return resource[:bundle_file].map { |x| ['-c', x] }.flatten unless resource[:bundle_file].nil?
end
def compose_files
resource[:compose_files].map { |x| ['-c', x] }.flatten
end
end
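The `exists?` comparison above normalizes untagged image names with `"#{image}:latest" unless image.include?(':')`. Sketched standalone:

```ruby
# Sketch of the tag normalization in docker_stack's exists?: images
# without an explicit tag are compared as ":latest", matching how
# `docker ps` reports them.
def normalize_image(image)
  image.include?(':') ? image : "#{image}:latest"
end

puts normalize_image('nginx')      # => nginx:latest
puts normalize_image('nginx:1.25') # => nginx:1.25
```

One caveat carried over from the provider code: a registry host with a port (for example `registry:5000/app`) already contains a colon, so no `:latest` is appended even though the reference is untagged.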

@@ -0,0 +1,70 @@
# frozen_string_literal: true
require 'json'
Puppet::Type.type(:docker_volume).provide(:ruby) do
desc 'Support for Docker Volumes'
mk_resource_methods
has_command(:docker, 'docker')
def volume_conf
flags = ['volume', 'create']
multi_flags = ->(values, format) {
filtered = [values].flatten.compact
filtered.map { |val| format % val }
}
[
['--driver=%s', :driver],
['--opt=%s', :options],
].each do |(format, key)|
values = resource[key]
new_flags = multi_flags.call(values, format)
flags.concat(new_flags)
end
flags << resource[:name]
end
def self.instances
output = docker(['volume', 'ls'])
lines = output.split("\n")
lines.shift # remove header row
lines.map do |line|
driver, name = line.split
inspect = docker(['volume', 'inspect', name])
obj = JSON.parse(inspect).first
new(
name: name,
mountpoint: obj['Mountpoint'],
options: obj['Options'],
ensure: :present,
driver: driver,
)
end
end
def self.prefetch(resources)
instances.each do |prov|
if (resource = resources[prov.name])
resource.provider = prov
end
end
end
def exists?
Puppet.info("Checking if docker volume #{name} exists")
@property_hash[:ensure] == :present
end
def create
Puppet.info("Creating docker volume #{name}")
docker(volume_conf)
end
def destroy
Puppet.info("Removing docker volume #{name}")
docker(['volume', 'rm', name])
end
end

@@ -0,0 +1,61 @@
# frozen_string_literal: true
Puppet::Type.newtype(:docker_compose) do
@doc = 'A type representing a Docker Compose file'
ensurable
def refresh
provider.restart
end
newparam(:scale) do
desc 'A hash of compose services and number of containers.'
validate do |value|
raise _('scale should be a Hash') unless value.is_a? Hash
raise _('The name of the compose service in scale should be a String') unless value.all? { |k, _v| k.is_a? String }
raise _('The number of containers in scale should be an Integer') unless value.all? { |_k, v| v.is_a? Integer }
end
end
newparam(:options) do
desc 'Additional options to be passed directly to docker-compose.'
validate do |value|
raise _('options should be an Array') unless value.is_a? Array
end
end
newparam(:up_args) do
desc 'Arguments to be passed directly to docker-compose up.'
validate do |value|
raise _('up_args should be a String') unless value.is_a? String
end
end
newparam(:compose_files, array_matching: :all) do
desc 'An array of Docker Compose Files paths.'
validate do |value|
raise _('compose files should be an array') unless value.is_a? Array
end
end
newparam(:name) do
isnamevar
desc 'The name of the project'
end
newparam(:tmpdir) do
desc "Override the temporary directory used by docker-compose.
This property is useful when the /tmp directory has been mounted
with the noexec option, or its use is otherwise prevented. It allows
the module consumer to redirect docker-compose's temporary files to
a known directory.
The directory passed to this property must exist and be accessible
by the user that is executing the puppet agent.
"
validate do |value|
raise _('tmpdir should be a String') unless value.is_a? String
end
end
end

@@ -0,0 +1,50 @@
# frozen_string_literal: true
Puppet::Type.newtype(:docker_network) do
@doc = 'Type representing a Docker network'
ensurable
newparam(:name) do
isnamevar
desc 'The name of the network'
end
newproperty(:driver) do
desc 'The network driver used by the network'
end
newparam(:subnet, array_matching: :all) do
desc 'The subnet in CIDR format that represents a network segment'
end
newparam(:gateway) do
desc 'An ipv4 or ipv6 gateway for the server subnet'
end
newparam(:ip_range) do
desc 'The range of IP addresses used by the network'
end
newproperty(:ipam_driver) do
desc 'The IPAM (IP Address Management) driver'
end
newparam(:aux_address) do
desc 'Auxiliary ipv4 or ipv6 addresses used by the Network driver'
end
newparam(:options) do
desc 'Additional options for the network driver'
end
newparam(:additional_flags) do
desc "Additional flags for the 'docker network create'"
end
newproperty(:id) do
desc 'The ID of the network provided by Docker'
validate do |value|
raise(Puppet::ParseError, "#{value} is read-only and is only available via puppet resource.")
end
end
end

@@ -0,0 +1,33 @@
# frozen_string_literal: true
Puppet::Type.newtype(:docker_stack) do
@doc = 'A type representing a Docker Stack'
ensurable
newparam(:bundle_file) do
desc 'Path to a Distributed Application Bundle file.'
validate do |value|
raise _('bundle files should be a string') unless value.is_a? String
end
end
newparam(:compose_files, array_matching: :all) do
desc 'An array of Docker Compose Files paths.'
validate do |value|
raise _('compose files should be an array') unless value.is_a? Array
end
end
newparam(:up_args) do
desc 'Arguments to be passed directly to docker stack deploy.'
validate do |value|
raise _('up_args should be a String') unless value.is_a? String
end
end
newparam(:name) do
isnamevar
desc 'The name of the stack'
end
end

@@ -0,0 +1,26 @@
# frozen_string_literal: true
Puppet::Type.newtype(:docker_volume) do
@doc = 'A type representing a Docker volume'
ensurable
newparam(:name) do
isnamevar
desc 'The name of the volume'
end
newproperty(:driver) do
desc 'The volume driver used by the volume'
end
newparam(:options) do
desc 'Additional options for the volume driver'
end
newproperty(:mountpoint) do
desc 'The location that the volume is mounted to'
validate do |value|
raise(Puppet::ParseError, "#{value} is read-only and is only available via puppet resource.")
end
end
end

@@ -0,0 +1,54 @@
# @summary install Docker Compose using the recommended curl command.
#
# @param ensure
# Whether to install or remove Docker Compose.
# Valid values are 'present' and 'absent'.
#
# @param version
# The version of Docker Compose to install.
#
class docker::compose (
Enum[present,absent] $ensure = present,
Optional[String] $version = undef,
) {
include docker
if $docker::manage_package {
include docker::params
$_version = $version ? {
undef => $docker::params::compose_version,
default => $version,
}
if $_version and $ensure != 'absent' {
$package_ensure = $_version
} else {
$package_ensure = $ensure
}
case $facts['os']['family'] {
'Debian': {
$_require = $docker::use_upstream_package_source ? {
true => [Apt::Source['docker'], Class['apt::update']],
false => undef,
}
}
'RedHat': {
$_require = $docker::use_upstream_package_source ? {
true => Yumrepo['docker'],
false => undef,
}
}
'Windows': {
fail('The docker compose portion of this module is not supported on Windows')
}
default: {
fail('The docker compose portion of this module only works on Debian or RedHat')
}
}
package { 'docker-compose-plugin':
ensure => $package_ensure,
require => $_require,
}
}
}

@@ -0,0 +1,16 @@
# @summary Configuration for docker
# @api private
#
class docker::config {
if $facts['os']['family'] != 'windows' {
$docker::docker_users.each |$user| {
docker::system_user { $user:
create_user => $docker::create_user,
}
}
} else {
$docker::docker_users.each |$user| {
docker::windows_account { $user: }
}
}
}

@@ -0,0 +1,81 @@
# @summary
# A define which executes a command inside a container.
#
# @param detach
# @param interactive
# @param env
# @param tty
# @param container
# @param command
# @param unless
# @param sanitise_name
# @param refreshonly
# @param onlyif
#
define docker::exec (
Boolean $detach = false,
Boolean $interactive = false,
Array $env = [],
Boolean $tty = false,
Optional[String] $container = undef,
Optional[String] $command = undef,
Optional[String] $unless = undef,
Boolean $sanitise_name = true,
Boolean $refreshonly = false,
Optional[String] $onlyif = undef,
) {
include docker::params
$docker_command = $docker::params::docker_command
if $facts['os']['family'] == 'windows' {
$exec_environment = "PATH=${facts['docker_program_files_path']}/Docker/"
$exec_timeout = 3000
$exec_path = ["${facts['docker_program_files_path']}/Docker/",]
$exec_provider = 'powershell'
} else {
$exec_environment = 'HOME=/root'
$exec_path = ['/bin', '/usr/bin',]
$exec_timeout = 0
$exec_provider = undef
}
$docker_exec_flags = docker_exec_flags({
detach => $detach,
interactive => $interactive,
tty => $tty,
env => any2array($env),
}
)
if $sanitise_name {
$sanitised_container = regsubst($container, '[^0-9A-Za-z.\-_]', '-', 'G')
} else {
$sanitised_container = $container
}
$exec = "${docker_command} exec ${docker_exec_flags} ${sanitised_container} ${command}"
$unless_command = $unless ? {
undef => undef,
'' => undef,
default => "${docker_command} exec ${docker_exec_flags} ${sanitised_container} ${$unless}",
}
$onlyif_command = $onlyif ? {
undef => undef,
'' => undef,
'running' => "${docker_command} ps --no-trunc --format='table {{.Names}}' | grep '^${sanitised_container}$'",
default => $onlyif
}
exec { $exec:
environment => $exec_environment,
onlyif => $onlyif_command,
path => $exec_path,
refreshonly => $refreshonly,
timeout => $exec_timeout,
provider => $exec_provider,
unless => $unless_command,
}
}
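The `regsubst` call above maps any character outside Docker's allowed container-name set to a hyphen. The same transformation in plain Ruby (a hypothetical helper, mirroring the 'G' global-replace flag):

```ruby
# Ruby equivalent of regsubst($container, '[^0-9A-Za-z.\-_]', '-', 'G'):
# every character outside [0-9A-Za-z._-] becomes a hyphen.
def sanitise_container_name(name)
  name.gsub(/[^0-9A-Za-z.\-_]/, '-')
end

puts sanitise_container_name('my app/v2')  # => my-app-v2
puts sanitise_container_name('web_1.prod') # => web_1.prod (unchanged)
```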

@@ -0,0 +1,185 @@
# @summary
# Module to install an up-to-date version of a Docker image
# from the registry
#
# @param ensure
# Whether you want the image present or absent.
#
# @param image
# If you want the name of the image to be different from the
# name of the puppet resource you can pass a value here.
#
# @param image_tag
# If you want a specific tag of the image to be installed
#
# @param image_digest
# If you want a specific content digest of the image to be installed
#
# @param docker_file
# If you want to add a docker image from specific docker file
#
# @param docker_tar
# If you want to load a docker image from specific docker tar
#
# @param force
#
# @param docker_dir
#
define docker::image (
Enum[present,absent,latest] $ensure = 'present',
Optional[Pattern[/^[\S]*$/]] $image = $title,
Optional[String] $image_tag = undef,
Optional[String] $image_digest = undef,
Boolean $force = false,
Optional[String] $docker_file = undef,
Optional[String] $docker_dir = undef,
Optional[String] $docker_tar = undef,
) {
include docker::params
$docker_command = $docker::params::docker_command
if $facts['os']['family'] == 'windows' {
$update_docker_image_template = 'docker/windows/update_docker_image.ps1.epp'
$update_docker_image_path = "${facts['docker_user_temp_path']}/update_docker_image.ps1"
$exec_environment = "PATH=${facts['docker_program_files_path']}/Docker/"
$exec_timeout = 3000
$update_docker_image_owner = undef
$exec_path = ["${facts['docker_program_files_path']}/Docker/",]
$exec_provider = 'powershell'
} else {
$update_docker_image_template = 'docker/update_docker_image.sh.epp'
$update_docker_image_path = '/usr/local/bin/update_docker_image.sh'
$update_docker_image_owner = 'root'
$exec_environment = 'HOME=/root'
$exec_path = ['/bin', '/usr/bin',]
$exec_timeout = 0
$exec_provider = undef
}
$parameters = {
'docker_command' => $docker_command,
}
# Wrapper used to ensure images are up to date
ensure_resource('file', $update_docker_image_path,
{
ensure => $docker::params::ensure,
owner => $update_docker_image_owner,
group => $update_docker_image_owner,
mode => '0555',
content => epp($update_docker_image_template, $parameters),
}
)
if ($docker_file) and ($docker_tar) {
fail('docker::image must not have both $docker_file and $docker_tar set')
}
if ($docker_dir) and ($docker_tar) {
fail('docker::image must not have both $docker_dir and $docker_tar set')
}
if ($image_digest) and ($docker_file) {
fail('docker::image must not have both $image_digest and $docker_file set')
}
if ($image_digest) and ($docker_dir) {
fail('docker::image must not have both $image_digest and $docker_dir set')
}
if ($image_digest) and ($docker_tar) {
fail('docker::image must not have both $image_digest and $docker_tar set')
}
if $force {
$image_force = '-f '
} else {
$image_force = ''
}
if $image_tag {
$image_arg = "${image}:${image_tag}"
$image_remove = "${docker_command} rmi ${image_force}${image}:${image_tag}"
$image_find = "${docker_command} images -q ${image}:${image_tag}"
} elsif $image_digest {
$image_arg = "${image}@${image_digest}"
    $image_remove = "${docker_command} rmi ${image_force}${image}@${image_digest}"
$image_find = "${docker_command} images -q ${image}@${image_digest}"
} else {
$image_arg = $image
$image_remove = "${docker_command} rmi ${image_force}${image}"
$image_find = "${docker_command} images -q ${image}"
}
if $facts['os']['family'] == 'windows' {
$_image_find = "If (-not (${image_find}) ) { Exit 1 }"
} else {
$_image_find = "${image_find} | grep ."
}
if ($docker_dir) and ($docker_file) {
$image_install = "${docker_command} build -t ${image_arg} -f ${docker_file} ${docker_dir}"
} elsif $docker_dir {
$image_install = "${docker_command} build -t ${image_arg} ${docker_dir}"
} elsif $docker_file {
    if $facts['os']['family'] == 'windows' {
$image_install = "Get-Content ${docker_file} -Raw | ${docker_command} build -t ${image_arg} -"
} else {
$image_install = "${docker_command} build -t ${image_arg} - < ${docker_file}"
}
} elsif $docker_tar {
$image_install = "${docker_command} load -i ${docker_tar}"
} else {
if $facts['os']['family'] == 'windows' {
$image_install = "& ${update_docker_image_path} -DockerImage ${image_arg}"
} else {
$image_install = "${update_docker_image_path} ${image_arg}"
}
}
if $ensure == 'absent' {
exec { $image_remove:
path => $exec_path,
environment => $exec_environment,
onlyif => $_image_find,
provider => $exec_provider,
timeout => $exec_timeout,
logoutput => true,
}
} elsif $ensure == 'latest' or $image_tag == 'latest' or $force {
notify { "Check if image ${image_arg} is in-sync":
noop => false,
}
~> exec { $image_install:
environment => $exec_environment,
path => $exec_path,
timeout => $exec_timeout,
returns => ['0', '2'],
require => File[$update_docker_image_path],
provider => $exec_provider,
logoutput => true,
}
~> exec { "echo 'Update of ${image_arg} complete'":
environment => $exec_environment,
path => $exec_path,
timeout => $exec_timeout,
require => File[$update_docker_image_path],
provider => $exec_provider,
logoutput => true,
refreshonly => true,
}
} elsif $ensure == 'present' {
exec { $image_install:
unless => $_image_find,
environment => $exec_environment,
path => $exec_path,
timeout => $exec_timeout,
returns => ['0', '2'],
require => File[$update_docker_image_path],
provider => $exec_provider,
logoutput => true,
}
}
Docker::Image <| title == $title |>
}
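The `docker::image` defined type above can be exercised in a few ways; a minimal usage sketch (image names and paths here are illustrative, not module defaults):

```puppet
# Pull an image and re-check it on every agent run
docker::image { 'ubuntu':
  image_tag => 'latest',
  ensure    => 'latest',
}

# Build an image from a Dockerfile and a separate build context
docker::image { 'myapp':
  docker_file => '/opt/myapp/Dockerfile',
  docker_dir  => '/opt/myapp',
}

# Remove an image, passing -f to docker rmi
docker::image { 'oldimage':
  ensure => absent,
  force  => true,
}
```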

# @summary
# Declares docker::image resources from a hash, typically sourced from Hiera.
#
# @param images
# A hash of docker::image resources to create.
class docker::images (
Hash $images
) {
create_resources(docker::image, $images)
}
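Since `docker::images` simply hands its hash to `create_resources`, it pairs naturally with Hiera; a hypothetical direct declaration (keys and values illustrative):

```puppet
class { 'docker::images':
  images => {
    'ubuntu' => { 'image_tag' => '22.04' },
    'nginx'  => { 'ensure'    => 'latest' },
  },
}
```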

# @summary
# Module to install an up-to-date version of Docker from package.
#
# @param version
# The package version to install, used to set the package name.
#
# @param ensure
# Passed to the docker package.
#
# @param prerequired_packages
# An array of additional packages that need to be installed to support docker.
#
# @param dependent_packages
# An array of packages installed by the docker-ce package v 18.09 and later.
# Used when uninstalling to ensure containers cannot be run on the system.
#
# @param tcp_bind
# The tcp socket to bind to in the format
# tcp://127.0.0.1:4243
#
# @param tls_enable
# Enable TLS.
#
# @param tls_verify
# Use TLS and verify the remote
#
# @param tls_cacert
# Path to TLS CA certificate
#
# @param tls_cert
# Path to TLS certificate file
#
# @param tls_key
# Path to TLS key file
#
# @param ip_forward
# Enables IP forwarding on the Docker host.
#
# @param iptables
# Enable Docker's addition of iptables rules.
#
# @param ip_masq
# Enable IP masquerading for bridge's IP range.
#
# @param icc
# Enable or disable Docker's unrestricted inter-container and Docker daemon host communication.
# (Requires iptables=true to disable)
#
# @param bip
# Specify docker's network bridge IP, in CIDR notation.
#
# @param mtu
# Docker network MTU.
#
# @param bridge
# Attach containers to a pre-existing network bridge
# use 'none' to disable container networking
#
# @param fixed_cidr
# IPv4 subnet for fixed IPs
# 10.20.0.0/16
#
# @param default_gateway
# IPv4 address of the container default gateway;
# this address must be part of the bridge subnet
# (which is defined by bridge)
#
# @param ipv6
# Enables ipv6 support for the docker daemon
#
# @param ipv6_cidr
# IPv6 subnet for fixed IPs
#
# @param default_gateway_ipv6
# IPv6 address of the container default gateway.
#
# @param socket_bind
# The unix socket to bind to.
#
# @param log_level
# Set the logging level
# Valid values: debug, info, warn, error, fatal
#
# @param log_driver
# Set the log driver.
# Docker default is json-file.
# Please verify the value yourself before setting it. The log drivers shipped with Docker are listed here:
# https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers
# Since custom log driver plugins are possible, the value cannot be validated in code here.
#
# @param log_opt
# Set the log driver specific options
# Valid values per log driver:
# none : undef
# local :
# max-size=[0-9+][k|m|g]
# max-file=[0-9+]
# json-file:
# max-size=[0-9+][k|m|g]
# max-file=[0-9+]
# syslog :
# syslog-address=[tcp|udp]://host:port
# syslog-address=unix://path
# syslog-facility=daemon|kern|user|mail|auth|
# syslog|lpr|news|uucp|cron|
# authpriv|ftp|
# local0|local1|local2|local3|
# local4|local5|local6|local7
# syslog-tag="some_tag"
# journald : undef
# gelf :
# gelf-address=udp://host:port
# gelf-tag="some_tag"
# fluentd :
# fluentd-address=host:port
# fluentd-tag={{.ID}} - short container id (12 characters)|
# {{.FullID}} - full container id
# {{.Name}} - container name
# splunk :
# splunk-token=<splunk_http_event_collector_token>
# splunk-url=https://your_splunk_instance:8088
# awslogs :
# awslogs-group=<Cloudwatch Log Group>
# awslogs-stream=<Cloudwatch Log Stream>
# awslogs-create-group=true|false
# awslogs-datetime-format=<Date format> - strftime expression
# awslogs-multiline-pattern=multiline start pattern using a regular expression
# tag={{.ID}} - short container id (12 characters)|
# {{.FullID}} - full container id
# {{.Name}} - container name
#
# @param selinux_enabled
# Enable selinux support. Default is false. SELinux does not presently
# support the BTRFS storage driver.
#
# @param use_upstream_package_source
# Whether or not to use the upstream package source.
# If you run your own package mirror, you may set this
# to false.
#
# @param pin_upstream_package_source
# Pin upstream package source; this option currently only has any effect on
# apt-based distributions. Set to false to remove pinning on the upstream
# package repository. See also "apt_source_pin_level".
#
# @param apt_source_pin_level
# What level to pin our source package repository to; this is only relevant
# if you're on an apt-based system (Debian, Ubuntu, etc) and
# $use_upstream_package_source is set to true. Set this to false to disable
# pinning, and undef to ensure the apt preferences file apt::source uses to
# define pins is removed.
#
# @param service_state
# Whether you want the docker daemon to start up
#
# @param service_enable
# Whether you want the docker daemon to start up at boot
#
# @param manage_service
# Specify whether the service should be managed.
#
# @param root_dir
# Custom root directory for containers
#
# @param dns
# Custom dns server address
#
# @param dns_search
# Custom dns search domains
#
# @param socket_group
# Group ownership of the unix control socket.
#
# @param extra_parameters
# Any extra parameters that should be passed to the docker daemon.
#
# @param shell_values
# Array of shell values to pass into init script config files
#
# @param proxy
# Will set the http_proxy and https_proxy env variables in /etc/sysconfig/docker (redhat/centos) or /etc/default/docker (debian)
#
# @param no_proxy
# Will set the no_proxy variable in /etc/sysconfig/docker (redhat/centos) or /etc/default/docker (debian)
#
# @param storage_driver
# Specify a storage driver to use
# Valid values: aufs, devicemapper, btrfs, overlay, overlay2, vfs, zfs
#
# @param dm_basesize
# The size to use when creating the base device, which limits the size of images and containers.
#
# @param dm_fs
# The filesystem to use for the base image (xfs or ext4)
#
# @param dm_mkfsarg
# Specifies extra mkfs arguments to be used when creating the base device.
#
# @param dm_mountopt
# Specifies extra mount options used when mounting the thin devices.
#
# @param dm_blocksize
# A custom blocksize to use for the thin pool.
# Default blocksize is 64K.
# Warning: _DO NOT_ change this parameter after the lvm devices have been initialized.
#
# @param dm_loopdatasize
# Specifies the size to use when creating the loopback file for the "data" device which is used for the thin pool
#
# @param dm_loopmetadatasize
# Specifies the size to use when creating the loopback file for the "metadata" device which is used for the thin pool
#
# @param dm_datadev
# (deprecated - dm_thinpooldev should be used going forward)
# A custom blockdevice to use for data for the thin pool.
#
# @param dm_metadatadev
# (deprecated - dm_thinpooldev should be used going forward)
# A custom blockdevice to use for metadata for the thin pool.
#
# @param dm_thinpooldev
# Specifies a custom block storage device to use for the thin pool.
#
# @param dm_use_deferred_removal
# Enables use of deferred device removal if libdm and the kernel driver support the mechanism.
#
# @param dm_use_deferred_deletion
# Enables use of deferred device deletion if libdm and the kernel driver support the mechanism.
#
# @param dm_blkdiscard
# Enables or disables the use of blkdiscard when removing devicemapper devices.
#
# @param dm_override_udev_sync_check
# By default, the devicemapper backend attempts to synchronize with the udev
# device manager for the Linux kernel. This option allows disabling that
# synchronization, to continue even though the configuration may be buggy.
#
# @param overlay2_override_kernel_check
# Overrides the Linux kernel version check allowing using overlay2 with kernel < 4.0.
#
# @param manage_package
# Whether to install or define the docker package. Set to false if you want to manage the package yourself.
#
# @param service_name
# Specify custom service name
#
# @param docker_users
# Specify an array of users to add to the docker group
#
# @param create_user
# If `true` the list of `docker_users` will be created as well as added to the docker group
#
# @param docker_group
# Specify a string for the docker group
#
# @param daemon_environment_files
# Specify additional environment files to add to the
# service-overrides.conf
#
# @param repo_opt
# Specify a string to pass as repository options (RedHat only)
#
# @param storage_devs
# A quoted, space-separated list of devices to be used.
#
# @param storage_vg
# The volume group to use for docker storage.
#
# @param storage_root_size
# The size to which the root filesystem should be grown.
#
# @param storage_data_size
# The desired size for the docker data LV
#
# @param storage_min_data_size
# The minimum size of the data volume, below which pool creation fails.
#
# @param storage_chunk_size
# Controls the chunk size/block size of thin pool.
#
# @param storage_growpart
# Enable resizing partition table backing root volume group.
#
# @param storage_auto_extend_pool
# Enable/disable automatic pool extension using lvm
#
# @param storage_pool_autoextend_threshold
# Auto pool extension threshold (in % of pool size)
#
# @param storage_pool_autoextend_percent
# Extend the pool by specified percentage when threshold is hit.
#
# @param tmp_dir_config
# Whether to set the TMPDIR value in the systemd config file
# Default: true (set the value); false will comment out the line.
# Note: false is backwards compatible prior to PR #58
#
# @param tmp_dir
# Sets the tmp dir for Docker (path)
#
# @param registry_mirror
# Sets the preferred container registry mirror.
#
# @param nuget_package_provider_version
# The version of the NuGet Package provider
#
# @param docker_msft_provider_version
# The version of the Microsoft Docker Provider Module
#
# @param docker_ce_start_command
# @param docker_ce_package_name
# @param docker_ce_cli_package_name
# @param docker_ce_source_location
# @param docker_ce_key_source
# @param docker_ce_key_id
# @param docker_ce_release
# @param docker_package_location
# @param docker_package_key_source
# @param docker_package_key_check_source
# @param docker_package_key_id
# @param docker_package_release
# @param docker_engine_start_command
# @param docker_engine_package_name
# @param docker_ce_channel
# @param docker_ee
# @param docker_ee_package_name
# @param docker_ee_source_location
# @param docker_ee_key_source
# @param docker_ee_key_id
# @param docker_ee_repos
# @param docker_ee_release
# @param package_release
# @param labels
# @param execdriver
# @param package_source
# @param os_lc
# @param storage_config
# @param storage_config_template
# @param storage_setup_file
# @param service_provider
# @param service_config
# @param service_config_template
# @param service_overrides_template
# @param socket_overrides_template
# @param socket_override
# @param service_after_override
# @param service_hasstatus
# @param service_hasrestart
# @param acknowledge_unsupported_os
# @param have_systemd_v230
#
class docker (
Optional[String] $version = $docker::params::version,
String $ensure = $docker::params::ensure,
Variant[Array[String], Hash] $prerequired_packages = $docker::params::prerequired_packages,
Array $dependent_packages = $docker::params::dependent_packages,
String $docker_ce_start_command = $docker::params::docker_ce_start_command,
Optional[String] $docker_ce_package_name = $docker::params::docker_ce_package_name,
String[1] $docker_ce_cli_package_name = $docker::params::docker_ce_cli_package_name,
Optional[String] $docker_ce_source_location = $docker::params::package_ce_source_location,
Optional[String] $docker_ce_key_source = $docker::params::package_ce_key_source,
Optional[String] $docker_ce_key_id = $docker::params::package_ce_key_id,
Optional[String] $docker_ce_release = $docker::params::package_ce_release,
Optional[String] $docker_package_location = $docker::params::package_source_location,
Optional[String] $docker_package_key_source = $docker::params::package_key_source,
Optional[Boolean] $docker_package_key_check_source = $docker::params::package_key_check_source,
Optional[String] $docker_package_key_id = $docker::params::package_key_id,
Optional[String] $docker_package_release = $docker::params::package_release,
String $docker_engine_start_command = $docker::params::docker_engine_start_command,
String $docker_engine_package_name = $docker::params::docker_engine_package_name,
String $docker_ce_channel = $docker::params::docker_ce_channel,
Optional[Boolean] $docker_ee = $docker::params::docker_ee,
Optional[String] $docker_ee_package_name = $docker::params::package_ee_package_name,
Optional[String] $docker_ee_source_location = $docker::params::package_ee_source_location,
Optional[String] $docker_ee_key_source = $docker::params::package_ee_key_source,
Optional[String] $docker_ee_key_id = $docker::params::package_ee_key_id,
Optional[String] $docker_ee_repos = $docker::params::package_ee_repos,
Optional[String] $docker_ee_release = $docker::params::package_ee_release,
Optional[Variant[String,Array[String]]] $tcp_bind = $docker::params::tcp_bind,
Boolean $tls_enable = $docker::params::tls_enable,
Boolean $tls_verify = $docker::params::tls_verify,
Optional[String] $tls_cacert = $docker::params::tls_cacert,
Optional[String] $tls_cert = $docker::params::tls_cert,
Optional[String] $tls_key = $docker::params::tls_key,
Boolean $ip_forward = $docker::params::ip_forward,
Boolean $ip_masq = $docker::params::ip_masq,
Optional[Boolean] $ipv6 = $docker::params::ipv6,
Optional[String] $ipv6_cidr = $docker::params::ipv6_cidr,
Optional[String] $default_gateway_ipv6 = $docker::params::default_gateway_ipv6,
Optional[String] $bip = $docker::params::bip,
Optional[String] $mtu = $docker::params::mtu,
Boolean $iptables = $docker::params::iptables,
Optional[Boolean] $icc = $docker::params::icc,
String $socket_bind = $docker::params::socket_bind,
Optional[String] $fixed_cidr = $docker::params::fixed_cidr,
Optional[String] $bridge = $docker::params::bridge,
Optional[String] $default_gateway = $docker::params::default_gateway,
Optional[String] $log_level = $docker::params::log_level,
Optional[String] $log_driver = $docker::params::log_driver,
Array $log_opt = $docker::params::log_opt,
Optional[Boolean] $selinux_enabled = $docker::params::selinux_enabled,
Optional[Boolean] $use_upstream_package_source = $docker::params::use_upstream_package_source,
Optional[Boolean] $pin_upstream_package_source = $docker::params::pin_upstream_package_source,
Optional[Integer] $apt_source_pin_level = $docker::params::apt_source_pin_level,
Optional[String] $package_release = $docker::params::package_release,
String $service_state = $docker::params::service_state,
Boolean $service_enable = $docker::params::service_enable,
Boolean $manage_service = $docker::params::manage_service,
Optional[String] $root_dir = $docker::params::root_dir,
Optional[Boolean] $tmp_dir_config = $docker::params::tmp_dir_config,
Optional[String] $tmp_dir = $docker::params::tmp_dir,
Optional[Variant[String,Array]] $dns = $docker::params::dns,
Optional[Variant[String,Array]] $dns_search = $docker::params::dns_search,
Optional[Variant[String,Boolean]] $socket_group = $docker::params::socket_group,
Array $labels = $docker::params::labels,
Optional[Variant[String,Array]] $extra_parameters = undef,
Optional[Variant[String,Array]] $shell_values = undef,
Optional[String] $proxy = $docker::params::proxy,
Optional[String] $no_proxy = $docker::params::no_proxy,
Optional[String] $storage_driver = $docker::params::storage_driver,
Optional[String] $dm_basesize = $docker::params::dm_basesize,
Optional[String] $dm_fs = $docker::params::dm_fs,
Optional[String] $dm_mkfsarg = $docker::params::dm_mkfsarg,
Optional[String] $dm_mountopt = $docker::params::dm_mountopt,
Optional[String] $dm_blocksize = $docker::params::dm_blocksize,
Optional[String] $dm_loopdatasize = $docker::params::dm_loopdatasize,
Optional[String] $dm_loopmetadatasize = $docker::params::dm_loopmetadatasize,
Optional[String] $dm_datadev = $docker::params::dm_datadev,
Optional[String] $dm_metadatadev = $docker::params::dm_metadatadev,
Optional[String] $dm_thinpooldev = $docker::params::dm_thinpooldev,
Optional[Boolean] $dm_use_deferred_removal = $docker::params::dm_use_deferred_removal,
Optional[Boolean] $dm_use_deferred_deletion = $docker::params::dm_use_deferred_deletion,
Optional[Boolean] $dm_blkdiscard = $docker::params::dm_blkdiscard,
Optional[Boolean] $dm_override_udev_sync_check = $docker::params::dm_override_udev_sync_check,
Boolean $overlay2_override_kernel_check = $docker::params::overlay2_override_kernel_check,
Optional[String] $execdriver = $docker::params::execdriver,
Boolean $manage_package = $docker::params::manage_package,
Optional[String] $package_source = $docker::params::package_source,
Optional[String] $service_name = $docker::params::service_name,
Array $docker_users = [],
Boolean $create_user = true,
String $docker_group = $docker::params::docker_group,
Array $daemon_environment_files = [],
Optional[Variant[String,Hash]] $repo_opt = $docker::params::repo_opt,
Optional[String] $os_lc = $docker::params::os_lc,
Optional[String] $storage_devs = $docker::params::storage_devs,
Optional[String] $storage_vg = $docker::params::storage_vg,
Optional[String] $storage_root_size = $docker::params::storage_root_size,
Optional[String] $storage_data_size = $docker::params::storage_data_size,
Optional[String] $storage_min_data_size = $docker::params::storage_min_data_size,
Optional[String] $storage_chunk_size = $docker::params::storage_chunk_size,
Optional[Boolean] $storage_growpart = $docker::params::storage_growpart,
Optional[String] $storage_auto_extend_pool = $docker::params::storage_auto_extend_pool,
Optional[String] $storage_pool_autoextend_threshold = $docker::params::storage_pool_autoextend_threshold,
Optional[String] $storage_pool_autoextend_percent = $docker::params::storage_pool_autoextend_percent,
Optional[Variant[String,Boolean]] $storage_config = $docker::params::storage_config,
Optional[String] $storage_config_template = $docker::params::storage_config_template,
Optional[String] $storage_setup_file = $docker::params::storage_setup_file,
Optional[String] $service_provider = $docker::params::service_provider,
Optional[Variant[String,Boolean]] $service_config = $docker::params::service_config,
Optional[String] $service_config_template = $docker::params::service_config_template,
Optional[Variant[String,Boolean]] $service_overrides_template = $docker::params::service_overrides_template,
Optional[Variant[String,Boolean]] $socket_overrides_template = $docker::params::socket_overrides_template,
Optional[Boolean] $socket_override = $docker::params::socket_override,
Optional[Variant[String,Boolean]] $service_after_override = $docker::params::service_after_override,
Optional[Boolean] $service_hasstatus = $docker::params::service_hasstatus,
Optional[Boolean] $service_hasrestart = $docker::params::service_hasrestart,
Optional[Variant[String,Array]] $registry_mirror = $docker::params::registry_mirror,
Boolean $acknowledge_unsupported_os = false,
# Windows specific parameters
Optional[String] $docker_msft_provider_version = $docker::params::docker_msft_provider_version,
Optional[String] $nuget_package_provider_version = $docker::params::nuget_package_provider_version,
Boolean $have_systemd_v230 = $docker::params::have_systemd_v230,
) inherits docker::params {
if $facts['os']['family'] and ! $acknowledge_unsupported_os {
assert_type(Pattern[/^(Debian|RedHat|windows)$/], $facts['os']['family']) |$a, $b| {
fail('This module only works on Debian, Red Hat or Windows based systems.')
}
if ($facts['os']['family'] == 'RedHat') and ($facts['os']['name'] != 'Amazon') and (versioncmp($facts['os']['release']['major'], '7') < 0) {
fail('This module only works on Red Hat based systems version 7 and higher.')
} elsif ($facts['os']['name'] == 'Amazon') and ($facts['os']['release']['major'] != '2') and (versioncmp($facts['os']['release']['major'], '2022') < 0) {
fail('This module only works on Amazon Linux 2 and newer systems.')
}
}
if ($default_gateway) and (!$bridge) {
fail('You must provide the $bridge parameter.')
}
if $log_level {
assert_type(Pattern[/^(debug|info|warn|error|fatal)$/], $log_level) |$a, $b| {
fail('log_level must be one of debug, info, warn, error or fatal')
}
}
if $storage_driver {
if $facts['os']['family'] == 'windows' {
assert_type(Pattern[/^(windowsfilter)$/], $storage_driver) |$a, $b| {
fail('Valid values for storage_driver on windows are windowsfilter')
}
} else {
assert_type(Pattern[/^(aufs|devicemapper|btrfs|overlay|overlay2|vfs|zfs)$/], $storage_driver) |$a, $b| {
fail('Valid values for storage_driver are aufs, devicemapper, btrfs, overlay, overlay2, vfs, zfs.')
}
}
}
if ($bridge) and ($facts['os']['family'] == 'windows') {
assert_type(Pattern[/^(none|nat|transparent|overlay|l2bridge|l2tunnel)$/], $bridge) |$a, $b| {
fail('bridge must be one of none, nat, transparent, overlay, l2bridge or l2tunnel on Windows.')
}
}
if $dm_fs {
assert_type(Pattern[/^(ext4|xfs)$/], $dm_fs) |$a, $b| {
fail('Only ext4 and xfs are supported currently for dm_fs.')
}
}
if ($dm_loopdatasize or $dm_loopmetadatasize) and ($dm_datadev or $dm_metadatadev) {
fail('You should provide parameters only for loop lvm or direct lvm, not both.')
}
if ($dm_datadev or $dm_metadatadev) and $dm_thinpooldev {
fail('You can use the $dm_thinpooldev parameter, or the $dm_datadev and $dm_metadatadev parameter pair, but you cannot use both.') # lint:ignore:140chars
}
if ($dm_datadev or $dm_metadatadev) {
notice('The $dm_datadev and $dm_metadatadev parameter pair are deprecated. The $dm_thinpooldev parameter should be used instead.')
}
if ($dm_datadev and !$dm_metadatadev) or (!$dm_datadev and $dm_metadatadev) {
fail('You need to provide both $dm_datadev and $dm_metadatadev parameters for direct lvm.')
}
if ($dm_basesize or $dm_fs or $dm_mkfsarg or $dm_mountopt or $dm_blocksize or $dm_loopdatasize or $dm_loopmetadatasize or $dm_datadev or $dm_metadatadev) and ($storage_driver != 'devicemapper') {
fail('Values for dm_ variables will be ignored unless storage_driver is set to devicemapper.')
}
if($tls_enable) {
if(! $tcp_bind) {
fail('You need to provide tcp bind parameter for TLS.')
}
}
if ($version == undef) or ($version !~ /^(17[.][0-1][0-9][.][0-1](~|-|\.)ce|1.\d+)/) {
if ($docker_ee) {
$package_location = $docker::docker_ee_source_location
$package_key_source = $docker::docker_ee_key_source
$package_key_check_source = $docker_package_key_check_source
$package_key = $docker::docker_ee_key_id
$package_repos = $docker::docker_ee_repos
$release = $docker::docker_ee_release
$docker_start_command = $docker::docker_ee_start_command
$docker_package_name = $docker::docker_ee_package_name
} else {
case $facts['os']['family'] {
'Debian' : {
$package_location = $docker_ce_source_location
$package_key_source = $docker_ce_key_source
$package_key = $docker_ce_key_id
$package_repos = $docker_ce_channel
$release = $docker_ce_release
}
'RedHat' : {
$package_location = $docker_ce_source_location
$package_key_source = $docker_ce_key_source
$package_key_check_source = $docker_package_key_check_source
}
'windows': {
          fail('This module only works with Docker Enterprise Edition on Windows.')
}
default: {
$package_location = $docker_package_location
$package_key_source = $docker_package_key_source
$package_key_check_source = $docker_package_key_check_source
}
}
$docker_start_command = $docker_ce_start_command
$docker_package_name = $docker_ce_package_name
}
} else {
case $facts['os']['family'] {
'Debian': {
$package_location = $docker_package_location
$package_key_source = $docker_package_key_source
$package_key_check_source = $docker_package_key_check_source
$package_key = $docker_package_key_id
$package_repos = 'main'
$release = $docker_package_release
}
'RedHat': {
$package_location = $docker_package_location
$package_key_source = $docker_package_key_source
$package_key_check_source = $docker_package_key_check_source
}
default: {
$package_location = $docker_package_location
$package_key_source = $docker_package_key_source
$package_key_check_source = $docker_package_key_check_source
}
}
$docker_start_command = $docker_engine_start_command
$docker_package_name = $docker_engine_package_name
}
if ($version != undef) and ($version =~ /^(17[.]0[0-4]|1.\d+)/) {
$root_dir_flag = '-g'
} else {
$root_dir_flag = '--data-root'
}
if $ensure != 'absent' {
contain docker::repos
contain docker::install
contain docker::config
contain docker::service
create_resources(
'docker::registry',
lookup("${module_name}::registries", Hash, 'deep', {}),
)
create_resources(
'docker::image',
lookup("${module_name}::images", Hash, 'deep', {}),
)
create_resources(
'docker::run',
lookup("${module_name}::runs", Hash, 'deep', {}),
)
Class['docker::repos']
-> Class['docker::install']
-> Class['docker::config']
-> Class['docker::service']
-> Docker::Registry <||>
-> Docker::Image <||>
-> Docker::Run <||>
-> Docker_compose <||>
} else {
contain 'docker::repos'
contain 'docker::install'
Class['docker::repos'] -> Class['docker::install']
}
}
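Putting the main class together, a hedged example of a common declaration; every value below is illustrative rather than a module default:

```puppet
class { 'docker':
  version      => '24.0.7',
  tcp_bind     => ['tcp://127.0.0.1:4243'],
  socket_bind  => 'unix:///var/run/docker.sock',
  log_driver   => 'json-file',
  log_opt      => ['max-size=10m', 'max-file=3'],
  docker_users => ['deploy'],
}
```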

# @summary
# Module to install an up-to-date version of Docker from a package repository.
# Only for Debian, Red Hat and Windows
#
# @param version
# The package version to install, used to set the package name.
#
# @param nuget_package_provider_version
# The version of the NuGet Package provider
#
# @param docker_msft_provider_version
# The version of the Microsoft Docker Provider Module
#
# @param docker_ee_package_name
# The name of the Docker Enterprise Edition package
#
# @param docker_download_url
#
# @param dependent_packages
#
class docker::install (
Optional[String] $version = $docker::version,
Optional[String] $nuget_package_provider_version = $docker::nuget_package_provider_version,
Optional[String] $docker_msft_provider_version = $docker::docker_msft_provider_version,
Optional[String] $docker_ee_package_name = $docker::docker_ee_package_name,
Optional[String] $docker_download_url = $docker::package_location,
Array $dependent_packages = $docker::dependent_packages,
) {
$docker_start_command = $docker::docker_start_command
if $facts['os']['family'] and ! $docker::acknowledge_unsupported_os {
assert_type(Pattern[/^(Debian|RedHat|windows)$/], $facts['os']['family']) |$a, $b| {
fail('This module only works on Debian, RedHat or Windows.')
}
}
if $docker::version and $docker::ensure != 'absent' {
$ensure = $docker::version
} else {
$ensure = $docker::ensure
}
if $docker::manage_package {
if empty($docker::repo_opt) {
$docker_hash = {}
} else {
$docker_hash = { 'install_options' => $docker::repo_opt }
}
if $docker::package_source {
if $facts['os']['family'] == 'windows' {
fail('Custom package source is currently not implemented on windows.')
}
case $docker::package_source {
/docker-engine/ : {
ensure_resource('package', 'docker', stdlib::merge($docker_hash, {
ensure => $ensure,
source => $docker::package_source,
name => $docker::docker_engine_package_name,
}))
}
/docker-ce/ : {
ensure_resource('package', 'docker', stdlib::merge($docker_hash, {
ensure => $ensure,
source => $docker::package_source,
name => $docker::docker_ce_package_name,
}))
ensure_resource('package', 'docker-ce-cli', stdlib::merge($docker_hash, {
ensure => $ensure,
source => $docker::package_source,
name => $docker::docker_ce_cli_package_name,
}))
}
default : {
# Empty
}
}
} else {
if $facts['os']['family'] != 'windows' {
ensure_resource('package', 'docker', stdlib::merge($docker_hash, {
ensure => $ensure,
name => $docker::docker_package_name,
}))
if $ensure == 'absent' {
ensure_resource('package', $dependent_packages, {
ensure => $ensure,
})
}
} else {
if $ensure == 'absent' {
$remove_docker_parameters = {
'docker_ee_package_name' => $docker_ee_package_name,
'version' => $version,
}
$check_docker_parameters = {
'docker_ee_package_name' => $docker_ee_package_name,
}
exec { 'remove-docker-package':
command => epp('docker/windows/remove_docker.ps1.epp', $remove_docker_parameters),
provider => powershell,
unless => epp('docker/windows/check_docker.ps1.epp', $check_docker_parameters),
logoutput => true,
}
} else {
if $docker::package_location {
$download_docker_parameters = {
'docker_download_url' => $docker_download_url,
}
$check_docker_url_parameters = {
'docker_download_url' => $docker_download_url,
}
exec { 'install-docker-package':
command => epp('docker/windows/download_docker.ps1.epp', $download_docker_parameters),
provider => powershell,
unless => epp('docker/windows/check_docker_url.ps1.epp', $check_docker_url_parameters),
logoutput => true,
notify => Exec['service-restart-on-failure'],
}
} else {
$install_powershell_provider_parameters = {
'nuget_package_provider_version' => $nuget_package_provider_version,
'docker_msft_provider_version' => $docker_msft_provider_version,
'version' => $version,
}
$check_powershell_provider_parameters = {
'nuget_package_provider_version' => $nuget_package_provider_version,
'docker_msft_provider_version' => $docker_msft_provider_version,
'docker_ee_package_name' => $docker_ee_package_name,
'version' => $version,
}
exec { 'install-docker-package':
command => epp('docker/windows/install_powershell_provider.ps1.epp', $install_powershell_provider_parameters),
provider => powershell,
unless => epp('docker/windows/check_powershell_provider.ps1.epp', $check_powershell_provider_parameters),
logoutput => true,
timeout => 1800,
notify => Exec['service-restart-on-failure'],
}
}
exec { 'service-restart-on-failure':
command => 'SC.exe failure Docker reset= 432000 actions= restart/30000/restart/60000/restart/60000',
refreshonly => true,
logoutput => true,
provider => powershell,
}
}
}
}
}
}


@@ -0,0 +1,108 @@
# @summary
#   Install Docker Machine using the recommended curl command.
#
# @param ensure
#   Whether to install or remove Docker Machine.
#   Valid values are 'present' and 'absent'.
#
# @param version
# The version of Docker Machine to install.
#
# @param install_path
#   The path where Docker Machine will be installed.
#
# @param proxy
# Proxy to use for downloading Docker Machine.
#
# @param url
#   The URL from which the Docker Machine binary should be fetched.
#
# @param curl_ensure
# Whether or not the curl package is ensured by this module.
#
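# A minimal usage sketch (the pinned version shown is illustrative):
#
# @example Install Docker Machine at a specific version
#   class { 'docker::machine':
#     version => '0.16.1',
#   }
#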
class docker::machine (
Enum[present,absent] $ensure = 'present',
Optional[String] $version = $docker::params::machine_version,
Optional[String] $install_path = $docker::params::machine_install_path,
Optional[Pattern['^((http[s]?)?:\/\/)?([^:^@]+:[^:^@]+@|)([\da-z\.-]+)\.([\da-z\.]{2,6})(:[\d])?([\/\w \.-]*)*\/?$']] $proxy = undef,
Optional[Variant[Stdlib::HTTPUrl, Stdlib::HTTPSUrl]] $url = undef,
Optional[Boolean] $curl_ensure = $docker::params::curl_ensure,
) inherits docker::params {
if $facts['os']['family'] == 'windows' {
$file_extension = '.exe'
$file_owner = 'Administrator'
} else {
$file_extension = ''
$file_owner = 'root'
}
$docker_machine_location = "${install_path}/docker-machine${file_extension}"
$docker_machine_location_versioned = "${install_path}/docker-machine-${version}${file_extension}"
if $ensure == 'present' {
$docker_machine_url = $url ? {
undef => "https://github.com/docker/machine/releases/download/v${version}/docker-machine-${facts['kernel']}-x86_64${file_extension}",
default => $url,
}
if $proxy != undef {
$proxy_opt = "--proxy ${proxy}"
} else {
$proxy_opt = ''
}
if $facts['os']['family'] == 'windows' {
$docker_download_command = "if (Invoke-WebRequest ${docker_machine_url} ${proxy_opt} -UseBasicParsing -OutFile \"${docker_machine_location_versioned}\") { exit 0 } else { exit 1}" # lint:ignore:140chars
$parameters = {
'proxy' => $proxy,
'docker_machine_url' => $docker_machine_url,
'docker_machine_location_versioned' => $docker_machine_location_versioned,
}
exec { "Install Docker Machine ${version}":
command => epp('docker/windows/download_docker_machine.ps1.epp', $parameters),
provider => powershell,
creates => $docker_machine_location_versioned,
}
file { $docker_machine_location:
ensure => 'link',
target => $docker_machine_location_versioned,
require => Exec["Install Docker Machine ${version}"],
}
} else {
if $curl_ensure {
stdlib::ensure_packages(['curl'])
}
exec { "Install Docker Machine ${version}":
path => '/usr/bin/',
cwd => '/tmp',
command => "curl -s -S -L ${proxy_opt} ${docker_machine_url} -o ${docker_machine_location_versioned}",
creates => $docker_machine_location_versioned,
require => Package['curl'],
}
file { $docker_machine_location_versioned:
owner => $file_owner,
mode => '0755',
require => Exec["Install Docker Machine ${version}"],
}
file { $docker_machine_location:
ensure => 'link',
target => $docker_machine_location_versioned,
require => File[$docker_machine_location_versioned],
}
}
} else {
file { $docker_machine_location_versioned:
ensure => absent,
}
file { $docker_machine_location:
ensure => absent,
}
}
}


@@ -0,0 +1,11 @@
# @summary
#   Create docker_network resources from a hash of networks.
#
# @param networks
#   A hash of networks and their parameters, passed to the docker_network type.
#
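# A minimal usage sketch (the network name and options are illustrative):
#
# @example Create a bridge network via the networks hash
#   class { 'docker::networks':
#     networks => {
#       'backend' => {
#         'ensure' => 'present',
#         'driver' => 'bridge',
#       },
#     },
#   }
#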
class docker::networks (
Optional[Hash[String, Hash]] $networks = undef,
) {
if $networks {
create_resources(docker_network, $networks)
}
}


@@ -0,0 +1,387 @@
# @summary Default parameter values for the docker module
#
class docker::params {
$version = undef
$ensure = present
$docker_ce_start_command = 'dockerd'
$docker_ce_package_name = 'docker-ce'
$docker_ce_cli_package_name = 'docker-ce-cli'
$docker_engine_start_command = 'docker daemon'
$docker_engine_package_name = 'docker-engine'
$docker_ce_channel = stable
$docker_ee = false
$docker_ee_start_command = 'dockerd'
$docker_ee_source_location = undef
$docker_ee_key_source = undef
$docker_ee_key_id = undef
$docker_ee_repos = stable
$tcp_bind = undef
$tls_enable = false
$tls_verify = true
$machine_version = '0.16.1'
$ip_forward = true
$iptables = true
$ipv6 = false
$ipv6_cidr = undef
$default_gateway_ipv6 = undef
$icc = undef
$ip_masq = true
$bip = undef
$mtu = undef
$fixed_cidr = undef
$bridge = undef
$default_gateway = undef
$socket_bind = 'unix:///var/run/docker.sock'
$log_level = undef
$log_driver = undef
$log_opt = []
$selinux_enabled = undef
$socket_group_default = 'docker'
$labels = []
$service_state = running
$service_enable = true
$manage_service = true
$root_dir = undef
$tmp_dir_config = true
$tmp_dir = '/tmp/'
$dns = undef
$dns_search = undef
$proxy = undef
$compose_version = undef
$no_proxy = undef
$execdriver = undef
$storage_driver = undef
$dm_basesize = undef
$dm_fs = undef
$dm_mkfsarg = undef
$dm_mountopt = undef
$dm_blocksize = undef
$dm_loopdatasize = undef
$dm_loopmetadatasize = undef
$dm_datadev = undef
$dm_metadatadev = undef
$dm_thinpooldev = undef
$dm_use_deferred_removal = undef
$dm_use_deferred_deletion = undef
$dm_blkdiscard = undef
$dm_override_udev_sync_check = undef
$overlay2_override_kernel_check = false
$manage_package = true
$package_source = undef
$service_name_default = 'docker'
$docker_group_default = 'docker'
$storage_devs = undef
$storage_vg = undef
$storage_root_size = undef
$storage_data_size = undef
$storage_min_data_size = undef
$storage_chunk_size = undef
$storage_growpart = undef
$storage_auto_extend_pool = undef
$storage_pool_autoextend_threshold = undef
$storage_pool_autoextend_percent = undef
$storage_config_template = 'docker/etc/sysconfig/docker-storage.epp'
$registry_mirror = undef
$curl_ensure = true
$os_lc = downcase($facts['os']['name'])
$docker_msft_provider_version = undef
$nuget_package_provider_version = undef
$docker_command = 'docker'
if ($facts['os']['family'] == 'windows') {
$docker_ee_package_name = 'Docker'
$machine_install_path = "${facts['docker_program_files_path']}/Docker"
$tls_cacert = "${facts['docker_program_data_path']}/docker/certs.d/ca.pem"
$tls_cert = "${facts['docker_program_data_path']}/docker/certs.d/server-cert.pem"
$tls_key = "${facts['docker_program_data_path']}/docker/certs.d/server-key.pem"
} else {
$docker_ee_package_name = 'docker-ee'
$machine_install_path = '/usr/local/bin'
$tls_cacert = '/etc/docker/tls/ca.pem'
$tls_cert = '/etc/docker/tls/cert.pem'
$tls_key = '/etc/docker/tls/key.pem'
}
case $facts['os']['family'] {
'Debian' : {
case $facts['os']['name'] {
'Ubuntu' : {
$package_release = "ubuntu-${facts['os']['distro']['codename']}"
if (versioncmp($facts['os']['release']['full'], '15.04') >= 0) {
$service_after_override = undef
$service_config_template = 'docker/etc/sysconfig/docker.systemd.epp'
$service_hasrestart = true
$service_hasstatus = true
$service_overrides_template = 'docker/etc/systemd/system/docker.service.d/service-overrides-debian.conf.epp'
$service_provider = 'systemd'
$socket_override = false
$socket_overrides_template = 'docker/etc/systemd/system/docker.socket.d/socket-overrides.conf.epp'
$storage_config = '/etc/default/docker-storage'
include docker::systemd_reload
} else {
$service_config_template = 'docker/etc/default/docker.epp'
$service_overrides_template = undef
$socket_overrides_template = undef
$socket_override = false
$service_after_override = undef
$service_provider = 'upstart'
$service_hasstatus = true
$service_hasrestart = false
$storage_config = undef
}
}
default: {
if (versioncmp($facts['facterversion'], '2.4.6') <= 0) {
$package_release = "debian-${facts['os']['lsb']['distcodename']}"
} else {
$package_release = "debian-${facts['os']['distro']['codename']}"
}
$service_provider = 'systemd'
$storage_config = '/etc/default/docker-storage'
$service_config_template = 'docker/etc/sysconfig/docker.systemd.epp'
$service_overrides_template = 'docker/etc/systemd/system/docker.service.d/service-overrides-debian.conf.epp'
$socket_overrides_template = 'docker/etc/systemd/system/docker.socket.d/socket-overrides.conf.epp'
$socket_override = false
$service_after_override = undef
$service_hasstatus = true
$service_hasrestart = true
include docker::systemd_reload
}
}
$apt_source_pin_level = 500
$docker_group = $docker_group_default
$pin_upstream_package_source = true
$repo_opt = undef
$service_config = undef
$service_name = $service_name_default
$socket_group = $socket_group_default
$storage_setup_file = undef
$use_upstream_package_source = true
$package_ce_source_location = "https://download.docker.com/linux/${os_lc}"
$package_ce_key_source = "https://download.docker.com/linux/${os_lc}/gpg"
$package_ce_key_id = '9DC858229FC7DD38854AE2D88D81803C0EBFCD88'
if (versioncmp($facts['facterversion'], '2.4.6') <= 0) {
$package_ce_release = $facts['os']['lsb']['distcodename']
} else {
$package_ce_release = $facts['os']['distro']['codename']
}
$package_source_location = 'http://apt.dockerproject.org/repo'
$package_key_source = 'https://apt.dockerproject.org/gpg'
$package_key_check_source = undef
$package_key_id = '58118E89F3A912897C070ADBF76221572C52609D'
$package_ee_source_location = $docker_ee_source_location
$package_ee_key_source = $docker_ee_key_source
$package_ee_key_id = $docker_ee_key_id
if (versioncmp($facts['facterversion'], '2.4.6') <= 0) {
$package_ee_release = $facts['os']['lsb']['distcodename']
} else {
$package_ee_release = $facts['os']['distro']['codename']
}
$package_ee_repos = $docker_ee_repos
$package_ee_package_name = $docker_ee_package_name
if ($service_provider == 'systemd') {
$detach_service_in_init = false
} else {
$detach_service_in_init = true
}
}
'RedHat' : {
$service_after_override = undef
$service_config = '/etc/sysconfig/docker'
$service_config_template = 'docker/etc/sysconfig/docker.systemd.epp'
$service_hasrestart = true
$service_hasstatus = true
$service_overrides_template = 'docker/etc/systemd/system/docker.service.d/service-overrides-rhel.conf.epp'
$service_provider = 'systemd'
$socket_override = false
$socket_overrides_template = 'docker/etc/systemd/system/docker.socket.d/socket-overrides.conf.epp'
$storage_config = '/etc/sysconfig/docker-storage'
$storage_setup_file = '/etc/sysconfig/docker-storage-setup'
$use_upstream_package_source = true
$apt_source_pin_level = undef
$detach_service_in_init = false
$package_ce_key_id = undef
$package_ce_key_source = 'https://download.docker.com/linux/centos/gpg'
$package_ce_release = undef
$package_ce_source_location = "https://download.docker.com/linux/centos/${facts['os']['release']['major']}/${facts['os']['architecture']}/${docker_ce_channel}"
$package_ee_key_id = $docker_ee_key_id
$package_ee_key_source = $docker_ee_key_source
$package_ee_package_name = $docker_ee_package_name
$package_ee_release = undef
$package_ee_repos = $docker_ee_repos
$package_ee_source_location = $docker_ee_source_location
$package_key_check_source = true
$package_key_id = undef
$package_key_source = 'https://yum.dockerproject.org/gpg'
$package_release = undef
$package_source_location = "https://yum.dockerproject.org/repo/main/centos/${facts['os']['release']['major']}"
$pin_upstream_package_source = undef
$service_name = $service_name_default
if $use_upstream_package_source {
$docker_group = $docker_group_default
$socket_group = $socket_group_default
} else {
$docker_group = 'dockerroot'
$socket_group = 'dockerroot'
}
$repo_opt = undef
}
'windows' : {
$msft_nuget_package_provider_version = $nuget_package_provider_version
$msft_provider_version = $docker_msft_provider_version
$msft_package_version = $version
$service_config_template = 'docker/windows/config/daemon.json.epp'
$service_config = "${facts['docker_program_data_path']}/docker/config/daemon.json"
$docker_group = 'docker'
$package_ce_source_location = undef
$package_ce_key_source = undef
$package_ce_key_id = undef
$package_ce_repos = undef
$package_ce_release = undef
$package_key_id = undef
$package_release = undef
$package_source_location = undef
$package_key_source = undef
$package_key_check_source = undef
$package_ee_source_location = undef
$package_ee_package_name = $docker_ee_package_name
$package_ee_key_source = undef
$package_ee_key_id = undef
$package_ee_repos = undef
$package_ee_release = undef
$use_upstream_package_source = undef
$pin_upstream_package_source = undef
$apt_source_pin_level = undef
$socket_group = undef
$service_name = $service_name_default
$repo_opt = undef
$storage_config = undef
$storage_setup_file = undef
$service_provider = undef
$service_overrides_template = undef
$socket_overrides_template = undef
$socket_override = false
$service_after_override = undef
$service_hasstatus = undef
$service_hasrestart = undef
$detach_service_in_init = true
}
'Suse': {
$docker_group = $docker_group_default
$socket_group = $socket_group_default
$package_key_source = undef
$package_key_check_source = undef
$package_source_location = undef
$package_key_id = undef
$package_repos = undef
$package_release = undef
$package_ce_key_source = undef
$package_ce_source_location = undef
$package_ce_key_id = undef
$package_ce_repos = undef
$package_ce_release = undef
$package_ee_source_location = undef
$package_ee_key_source = undef
$package_ee_key_id = undef
$package_ee_release = undef
$package_ee_repos = undef
$package_ee_package_name = undef
$use_upstream_package_source = true
$service_overrides_template = undef
$socket_overrides_template = undef
$socket_override = false
$service_after_override = undef
$service_hasstatus = undef
$service_hasrestart = undef
$service_provider = 'systemd'
$package_name = $docker_ce_package_name
$service_name = $service_name_default
$detach_service_in_init = true
$repo_opt = undef
$nowarn_kernel = false
$service_config = undef
$storage_config = undef
$storage_setup_file = undef
$service_config_template = undef
$pin_upstream_package_source = undef
$apt_source_pin_level = undef
}
default: {
$docker_group = $docker_group_default
$socket_group = $socket_group_default
$package_key_source = undef
$package_key_check_source = undef
$package_source_location = undef
$package_key_id = undef
$package_repos = undef
$package_release = undef
$package_ce_key_source = undef
$package_ce_source_location = undef
$package_ce_key_id = undef
$package_ce_repos = undef
$package_ce_release = undef
$package_ee_source_location = undef
$package_ee_key_source = undef
$package_ee_key_id = undef
$package_ee_release = undef
$package_ee_repos = undef
$package_ee_package_name = undef
$use_upstream_package_source = true
$service_overrides_template = undef
$socket_overrides_template = undef
$socket_override = false
$service_after_override = undef
$service_hasstatus = undef
$service_hasrestart = undef
$service_provider = undef
$package_name = $docker_ce_package_name
$service_name = $service_name_default
$detach_service_in_init = true
$repo_opt = undef
$nowarn_kernel = false
$service_config = undef
$storage_config = undef
$storage_setup_file = undef
$service_config_template = undef
$pin_upstream_package_source = undef
$apt_source_pin_level = undef
}
}
# Special extra packages are required on some OSes.
# Specifically apparmor is needed for Ubuntu:
# https://github.com/docker/docker/issues/4734
$prerequired_packages = $facts['os']['family'] ? {
'Debian' => $facts['os']['name'] ? {
'Debian' => ['cgroupfs-mount',],
'Ubuntu' => ['cgroup-lite', 'apparmor',],
default => [],
},
'RedHat' => ['device-mapper'],
default => [],
}
$dependent_packages = [$docker_ce_cli_package_name, 'containerd.io',]
if($service_provider == 'systemd') {
# systemd v230 adds new StartLimitIntervalSec, StartLimitBurst
if($facts['os']['family'] == 'RedHat' and versioncmp($facts['os']['release']['major'], '8') < 0) {
$have_systemd_v230 = false
} elsif($facts['os']['name'] == 'Ubuntu' and versioncmp($facts['os']['release']['major'], '18.04') < 0) {
$have_systemd_v230 = false
} elsif($facts['os']['name'] == 'Debian' and versioncmp($facts['os']['release']['major'], '9') < 0) {
$have_systemd_v230 = false
} else {
$have_systemd_v230 = true
}
} else {
$have_systemd_v230 = false
}
}


@@ -0,0 +1,122 @@
# @summary
# A define that manages a docker plugin
#
# @param plugin_name
#   The name of the plugin to install or remove.
#   Note that the default behaviour of docker plugin
#   requires a plugin to be disabled before it can be removed.
#
# @param enabled
# A setting to enable or disable an installed plugin.
#
# @param timeout
# The number of seconds to wait when enabling a plugin
#
# @param plugin_alias
# An alternative name to use for an installed plugin
#
# @param disable_on_install
# Alters the default behaviour of enabling a plugin upon install
#
# @param disable_content_trust
# Skip image verification
#
# @param grant_all_permissions
# Grant all permissions necessary to run the plugin
#
# @param force_remove
# Force the removal of an active plugin
#
# @param settings
# Any additional settings to pass to the plugin during install
#
# @param ensure
#
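# A minimal usage sketch (the plugin name is illustrative):
#
# @example Install and enable a plugin
#   docker::plugin { 'vieux/sshfs:latest':
#     ensure  => 'present',
#     enabled => true,
#   }
#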
define docker::plugin (
Enum[present,absent] $ensure = 'present',
String $plugin_name = $title,
Boolean $enabled = true,
Optional[String] $timeout = undef,
Optional[String] $plugin_alias = undef,
Boolean $disable_on_install = false,
Boolean $disable_content_trust = true,
Boolean $grant_all_permissions = true,
Boolean $force_remove = true,
Array $settings = [],
) {
include docker::params
$docker_command = "${docker::params::docker_command} plugin"
if ($facts['os']['family'] == 'windows') {
fail('Feature not implemented on windows.')
}
if $ensure == 'present' {
$docker_plugin_install_flags = docker_plugin_install_flags({
plugin_name => $plugin_name,
plugin_alias => $plugin_alias,
disable_on_install => $disable_on_install,
disable_content_trust => $disable_content_trust,
grant_all_permissions => $grant_all_permissions,
settings => $settings,
}
)
$exec_install = "${docker_command} install ${docker_plugin_install_flags}"
$unless_install = "${docker_command} ls --format='{{.PluginReference}}' | grep -w ${plugin_name}"
exec { "plugin install ${plugin_name}":
command => $exec_install,
environment => 'HOME=/root',
path => ['/bin', '/usr/bin'],
timeout => 0,
unless => $unless_install,
}
} elsif $ensure == 'absent' {
$docker_plugin_remove_flags = docker_plugin_remove_flags({
plugin_name => $plugin_name,
force_remove => $force_remove,
}
)
$exec_rm = "${docker_command} rm ${docker_plugin_remove_flags}"
$onlyif_rm = "${docker_command} ls --format='{{.PluginReference}}' | grep -w ${plugin_name}"
exec { "plugin remove ${plugin_name}":
command => $exec_rm,
environment => 'HOME=/root',
path => ['/bin', '/usr/bin'],
timeout => 0,
onlyif => $onlyif_rm,
}
}
if $enabled {
$docker_plugin_enable_flags = docker_plugin_enable_flags({
plugin_name => $plugin_name,
plugin_alias => $plugin_alias,
timeout => $timeout,
}
)
$exec_enable = "${docker_command} enable ${docker_plugin_enable_flags}"
$onlyif_enable = "${docker_command} ls -f enabled=false --format='{{.PluginReference}}' | grep -w ${plugin_name}"
exec { "plugin enable ${plugin_name}":
command => $exec_enable,
environment => 'HOME=/root',
path => ['/bin', '/usr/bin'],
timeout => 0,
onlyif => $onlyif_enable,
}
} elsif $enabled == false {
exec { "disable ${plugin_name}":
command => "${docker_command} disable ${plugin_name}",
environment => 'HOME=/root',
path => ['/bin', '/usr/bin',],
timeout => 0,
unless => "${docker_command} ls -f enabled=false --format='{{.PluginReference}}' | grep -w ${plugin_name}",
}
}
}


@@ -0,0 +1,9 @@
# @summary
#   Create docker::plugin resources from a hash of plugins.
#
# @param plugins
#   A hash of plugins and their parameters, passed to docker::plugin.
#
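# A minimal usage sketch (the plugin name is illustrative):
#
# @example Declare plugins from a hash (for example, from Hiera)
#   class { 'docker::plugins':
#     plugins => {
#       'vieux/sshfs:latest' => {
#         'ensure' => 'present',
#       },
#     },
#   }
#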
class docker::plugins (
Hash $plugins
) {
create_resources(docker::plugin, $plugins)
}


@@ -0,0 +1,159 @@
# @summary
# Module to configure private docker registries from which to pull Docker images
#
# @param server
# The hostname and port of the private Docker registry. Ex: dockerreg:5000
#
# @param ensure
#   Whether to log in to or log out of the registry.
#
# @param username
#   Username for authentication to private Docker registry. Leave undef if
#   auth is not required.
#
# @param password
# Password for authentication to private Docker registry. Leave undef if
# auth is not required.
#
# @param pass_hash
#   The hash to be used for the receipt. If left as undef, a hash will be generated.
#
# @param email
# Email for registration to private Docker registry. Leave undef if
# auth is not required.
#
# @param local_user
# The local user to log in as. Docker will store credentials in this
#   user's home directory.
#
# @param local_user_home
# The local user home directory.
#
# @param receipt
# Required to be true for idempotency
#
# @param version
#
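# A minimal usage sketch (the server and credentials are illustrative):
#
# @example Log in to a private registry
#   docker::registry { 'dockerreg:5000':
#     username => 'user',
#     password => 'secret',
#   }
#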
define docker::registry (
Optional[String] $server = $title,
Enum[present,absent] $ensure = 'present',
Optional[String] $username = undef,
Optional[String] $password = undef,
Optional[String] $pass_hash = undef,
Optional[String] $email = undef,
String $local_user = 'root',
Optional[String] $local_user_home = undef,
Optional[String] $version = $docker::version,
Boolean $receipt = true,
) {
include docker::params
$docker_command = $docker::params::docker_command
if $facts['os']['family'] == 'windows' {
$exec_environment = ["PATH=${facts['docker_program_files_path']}/Docker/",]
$exec_timeout = 3000
$exec_path = ["${facts['docker_program_files_path']}/Docker/",]
$exec_provider = 'powershell'
$password_env = '$env:password'
$exec_user = undef
} else {
$exec_environment = []
$exec_path = ['/bin', '/usr/bin',]
$exec_timeout = 0
$exec_provider = undef
$password_env = "\${password}"
$exec_user = $local_user
if $local_user_home {
$_local_user_home = $local_user_home
} else {
# set sensible default
$_local_user_home = ($local_user == 'root') ? {
true => '/root',
default => "/home/${local_user}",
}
}
}
if $ensure == 'present' {
if $username != undef and $password != undef and $email != undef and $version != undef and $version =~ /1[.][1-9]0?/ {
$auth_cmd = "${docker_command} login -u '${username}' -p \"${password_env}\" -e '${email}' ${server}"
$auth_environment = "password=${password}"
} elsif $username != undef and $password != undef {
$auth_cmd = "${docker_command} login -u '${username}' -p \"${password_env}\" ${server}"
$auth_environment = "password=${password}"
} else {
$auth_cmd = "${docker_command} login ${server}"
$auth_environment = ''
}
} else {
$auth_cmd = "${docker_command} logout ${server}"
$auth_environment = ''
}
$docker_auth = "${title}${auth_environment}${auth_cmd}${local_user}"
if $auth_environment != '' {
$exec_env = concat($exec_environment, $auth_environment, "docker_auth=${docker_auth}")
} else {
$exec_env = concat($exec_environment, "docker_auth=${docker_auth}")
}
if $receipt {
if $facts['os']['family'] != 'windows' {
      # server may be a URI, which can contain /
$server_strip = regsubst($server, '/', '_', 'G')
      # the pw_hash salt cannot contain '-' or '_'
$local_user_strip = regsubst($local_user, '[-_]', '', 'G')
$_pass_hash = $pass_hash ? {
Undef => pw_hash($docker_auth, 'SHA-512', $local_user_strip),
default => $pass_hash
}
$_auth_command = "${auth_cmd} || (rm -f \"/${_local_user_home}/registry-auth-puppet_receipt_${server_strip}_${local_user}\"; exit 1;)"
file { "/${_local_user_home}/registry-auth-puppet_receipt_${server_strip}_${local_user}":
ensure => $ensure,
content => $_pass_hash,
owner => $local_user,
group => $local_user,
notify => Exec["${title} auth"],
}
} else {
      # server may be a URI, which can contain /
$server_strip = regsubst($server, '[/:]', '_', 'G')
$passfile = "${facts['docker_user_temp_path']}/registry-auth-puppet_receipt_${server_strip}_${local_user}"
$_auth_command = "if (-not (${auth_cmd})) { Remove-Item -Path ${passfile} -Force -Recurse -EA SilentlyContinue; exit 1 } else { exit 0 }" # lint:ignore:140chars
if $ensure == 'absent' {
file { $passfile:
ensure => $ensure,
notify => Exec["${title} auth"],
}
} elsif $ensure == 'present' {
exec { 'compute-hash':
command => stdlib::deferrable_epp('docker/windows/compute_hash.ps1.epp', { 'passfile' => $passfile }),
environment => Deferred('docker::env', [$exec_env]),
provider => $exec_provider,
logoutput => true,
unless => stdlib::deferrable_epp('docker/windows/check_hash.ps1.epp', { 'passfile' => $passfile }),
notify => Exec["${title} auth"],
}
}
}
} else {
$_auth_command = $auth_cmd
}
exec { "${title} auth":
environment => Deferred('docker::env', [$exec_env]),
command => Deferred('sprintf', [$_auth_command]),
user => $exec_user,
path => $exec_path,
timeout => $exec_timeout,
provider => $exec_provider,
refreshonly => $receipt,
}
}


@@ -0,0 +1,9 @@
# @summary
#   Create docker::registry resources from a hash of registries.
#
# @param registries
#   A hash of registries and their login parameters, passed to docker::registry.
#
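# A minimal usage sketch (the server and credentials are illustrative):
#
# @example Declare registry logins from a hash (for example, from Hiera)
#   class { 'docker::registry_auth':
#     registries => {
#       'dockerreg:5000' => {
#         'username' => 'user',
#         'password' => 'secret',
#       },
#     },
#   }
#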
class docker::registry_auth (
Hash $registries
) {
create_resources(docker::registry, $registries)
}


@@ -0,0 +1,90 @@
# @summary
#   Manage the package repositories used to install Docker.
#
# @param location
#   The URL of the package repository.
#
# @param key_source
#   The source of the repository signing key.
#
# @param key_check_source
#   Whether to validate the repository GPG key against its source.
#
# @param architecture
#   The package architecture of the repository.
#
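# A minimal usage sketch (the repository URL is illustrative):
#
# @example Point at an upstream apt repository
#   class { 'docker::repos':
#     location => 'https://download.docker.com/linux/ubuntu',
#   }
#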
class docker::repos (
Optional[String] $location = $docker::package_location,
Optional[String] $key_source = $docker::package_key_source,
Optional[Boolean] $key_check_source = $docker::package_key_check_source,
String $architecture = $facts['os']['architecture'],
) {
stdlib::ensure_packages($docker::prerequired_packages)
case $facts['os']['family'] {
'Debian': {
$release = $docker::release
$package_key = $docker::package_key
$package_repos = $docker::package_repos
if ($docker::use_upstream_package_source) {
apt::source { 'docker':
location => $location,
architecture => $architecture,
release => $release,
repos => $package_repos,
key => {
id => $package_key,
source => $key_source,
},
include => {
src => false,
},
}
$url_split = split($location, '/')
$repo_host = $url_split[2]
$pin_ensure = $docker::pin_upstream_package_source ? {
true => 'present',
default => 'absent',
}
apt::pin { 'docker':
ensure => $pin_ensure,
origin => $repo_host,
priority => $docker::apt_source_pin_level,
}
if $docker::manage_package {
include apt
if (versioncmp($facts['facterversion'], '2.4.6') <= 0) {
if $facts['os']['name'] == 'Debian' and $facts['os']['lsb']['distcodename'] == 'wheezy' {
include apt::backports
}
} else {
if $facts['os']['name'] == 'Debian' and $facts['os']['distro']['codename'] == 'wheezy' {
include apt::backports
}
}
Exec['apt_update'] -> Package[$docker::prerequired_packages]
Apt::Source['docker'] -> Package['docker']
}
}
}
'RedHat': {
if ($docker::manage_package) {
$baseurl = $location
$gpgkey = $key_source
$gpgkey_check = $key_check_source
if ($docker::use_upstream_package_source) {
yumrepo { 'docker':
descr => 'Docker',
baseurl => $baseurl,
gpgkey => $gpgkey,
gpgcheck => $gpgkey_check,
}
Yumrepo['docker'] -> Package['docker']
}
}
}
default: {}
}
}


@@ -0,0 +1,752 @@
# @summary
# A define which manages a running docker container.
#
# @param restart
# Sets a restart policy on the docker run.
#   Note: If set, puppet will NOT set up an init script to manage it; instead
# it will do a raw docker run command using a CID file to track the container
# ID.
#
# If you want a normal named container with an init script and a restart policy
# you must use the extra_parameters feature and pass it in like this:
#
# extra_parameters => ['--restart=always']
#
#   However, if your system is using systemd this restart policy will be
#   ineffective because the ExecStop commands will run, which will cause
#   docker to stop restarting the container. In this case you should use the
# systemd_restart option to specify the policy you want.
#
# This will allow the docker container to be restarted if it dies, without
# puppet help.
#
# @param verify_digest
#   (optional) Verify that the image has not been modified by comparing the
#   digest checksum before starting the docker image.
#   To get the digest of an image, run the following command:
#   docker image inspect <<image>> --format='{{index .RepoDigests 0}}'
#
# @param service_prefix
# (optional) The name to prefix the startup script with and the Puppet
# service resource title with. Default: 'docker-'
#
# @param restart_service
#   (optional) Whether or not to restart the service if the generated init
# script changes. Default: true
#
# @param restart_service_on_docker_refresh
# Whether or not to restart the service if the docker service is restarted.
# Only has effect if the docker_service parameter is set.
# Default: true
#
# @param manage_service
# (optional) Whether or not to create a puppet Service resource for the init
# script. Disabling this may be useful if integrating with existing modules.
# Default: true
#
# @param docker_service
# (optional) If (and how) the Docker service itself is managed by Puppet
# true -> Service['docker']
# false -> no Service dependency
# anything else -> Service[docker_service]
# Default: false
#
# @param health_check_cmd
# (optional) Specifies the command to execute to check that the container is healthy using the docker health check functionality.
# Default: undef
#
# @param health_check_interval
# (optional) Specifies the interval that the health check command will execute in seconds.
# Default: undef
#
# @param restart_on_unhealthy
#   (optional) Checks the health status of the Docker container and if it is unhealthy the service will be restarted.
#   The health_check_cmd parameter must be set to use this functionality.
# Default: undef
#
# @param net
#
# The docker network to attach to a container.
# Can be a String or Array (if using multiple networks)
# Default: bridge
#
# @param extra_parameters
# An array of additional command line arguments to pass to the `docker run`
# command. Useful for adding additional new or experimental options that the
# module does not yet support.
#
# @param systemd_restart
# (optional) If the container is to be managed by a systemd unit file set the
# Restart option on the unit file. Can be any valid value for this systemd
# configuration. Most commonly used are on-failure or always.
# Default: on-failure
#
# @param custom_unless
# (optional) Specify an additional unless for the Docker run command when using restart.
# Default: undef
#
# @param after_create
# (optional) Specifies the command to execute after container is created but before it is started.
# Default: undef
#
# @param remain_after_exit
# (optional) If the container is to be managed by a systemd unit file set the
# RemainAfterExit option on the unit file. Can be any valid value for this systemd
# configuration.
# Default: Not included in unit file
#
# @param prepare_service_only
# (optional) Prepare the service and enable it as usual but do not run it right away.
#   Useful when building VM images using masterless Puppet and then letting the
#   Docker images be downloaded when a new VM is created.
# Default: false
#
# @param image
#
# @param ensure
#
# @param command
#
# @param memory_limit
#
# @param cpuset
#
# @param ports
#
# @param labels
#
# @param expose
#
# @param volumes
#
# @param links
#
# @param use_name
#
# @param running
#
# @param volumes_from
#
# @param username
#
# @param hostname
#
# @param env
#
# @param env_file
#
# @param dns
#
# @param dns_search
#
# @param lxc_conf
#
# @param service_provider
#
# @param disable_network
#
# @param privileged
#
# @param detach
#
# @param extra_systemd_parameters
#
# @param pull_on_start
#
# @param after
#
# @param after_service
#
# @param depends
#
# @param depend_services
#
# @param tty
#
# @param socket_connect
#
# @param hostentries
#
# @param before_start
#
# @param before_stop
#
# @param after_start
#
# @param after_stop
#
# @param remove_container_on_start
#
# @param remove_container_on_stop
#
# @param remove_volume_on_start
#
# @param remove_volume_on_stop
#
# @param stop_wait_time
#
# @param syslog_identifier
#
# @param syslog_facility
#
# @param read_only
#
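# A minimal usage sketch (the image and command are illustrative):
#
# @example Run a container with a restart policy handled by systemd
#   docker::run { 'helloworld':
#     image           => 'ubuntu:latest',
#     command         => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
#     systemd_restart => 'always',
#   }
#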
define docker::run (
Optional[Pattern[/^[\S]*$/]] $image = undef,
Enum[present,absent] $ensure = 'present',
Optional[String] $verify_digest = undef,
Optional[String] $command = undef,
Pattern[/^[\d]*(b|k|m|g)$/] $memory_limit = '0b',
Variant[String,Array,Undef] $cpuset = [],
Variant[String,Array,Undef] $ports = [],
Variant[String,Array,Undef] $labels = [],
Variant[String,Array,Undef] $expose = [],
Variant[String,Array,Undef] $volumes = [],
Variant[String,Array,Undef] $links = [],
Boolean $use_name = false,
Boolean $running = true,
Variant[String,Array] $volumes_from = [],
Variant[String,Array[String[1],1],Undef] $net = undef,
Variant[String,Boolean] $username = false,
Variant[String,Boolean] $hostname = false,
Variant[String,Array] $env = [],
Variant[String,Array] $env_file = [],
Variant[String,Array] $dns = [],
Variant[String,Array] $dns_search = [],
Variant[String,Array] $lxc_conf = [],
String $service_prefix = 'docker-',
Optional[String] $service_provider = undef,
Boolean $restart_service = true,
Boolean $restart_service_on_docker_refresh = true,
Boolean $manage_service = true,
Variant[String,Boolean] $docker_service = false,
Boolean $disable_network = false,
Boolean $privileged = false,
Optional[Boolean] $detach = undef,
Optional[Variant[String,Array[String]]] $extra_parameters = undef,
String $systemd_restart = 'on-failure',
Variant[String,Hash] $extra_systemd_parameters = {},
Boolean $pull_on_start = false,
Variant[String,Array] $after = [],
Variant[String,Array] $after_service = [],
Variant[String,Array] $depends = [],
Variant[String,Array] $depend_services = ['docker.service'],
Boolean $tty = false,
Variant[String,Array] $socket_connect = [],
Variant[String,Array] $hostentries = [],
Optional[String] $restart = undef,
Variant[String,Boolean] $before_start = false,
Variant[String,Boolean] $before_stop = false,
Variant[String,Boolean] $after_start = false,
Variant[String,Boolean] $after_stop = false,
Optional[String] $after_create = undef,
Boolean $remove_container_on_start = true,
Boolean $remove_container_on_stop = true,
Boolean $remove_volume_on_start = false,
Boolean $remove_volume_on_stop = false,
Integer $stop_wait_time = 10,
Optional[String] $syslog_identifier = undef,
Optional[String] $syslog_facility = undef,
Boolean $read_only = false,
Optional[String] $health_check_cmd = undef,
Boolean $restart_on_unhealthy = false,
Optional[Integer] $health_check_interval = undef,
Variant[String,Array] $custom_unless = [],
Optional[String] $remain_after_exit = undef,
Boolean $prepare_service_only = false,
) {
include docker::params
if ($socket_connect != []) {
$sockopts = join(any2array($socket_connect), ',')
$docker_command = "${docker::params::docker_command} -H ${sockopts}"
} else {
$docker_command = $docker::params::docker_command
}
$service_name = $docker::service_name
$docker_group = $docker::docker_group
if $restart {
assert_type(Pattern[/^(no|always|unless-stopped|on-failure)|^on-failure:[\d]+$/], $restart)
}
if ($remove_volume_on_start and !$remove_container_on_start) {
fail("In order to remove the volume on start for ${title} you need to also remove the container")
}
if ($remove_volume_on_stop and !$remove_container_on_stop) {
fail("In order to remove the volume on stop for ${title} you need to also remove the container")
}
if $use_name {
notify { "docker use_name warning: ${title}":
message  => 'The use_name parameter is no longer required and will be removed in a future release',
withpath => true,
}
}
if $systemd_restart {
assert_type(Pattern[/^(no|always|on-success|on-failure|on-abnormal|on-abort|on-watchdog)$/], $systemd_restart)
}
$service_provider_real = $service_provider ? {
undef => $docker::params::service_provider,
default => $service_provider,
}
if $detach == undef {
$valid_detach = $service_provider_real ? {
'systemd' => false,
default => $docker::params::detach_service_in_init,
}
} else {
$valid_detach = $detach
}
$extra_parameters_array = any2array($extra_parameters)
$after_array = any2array($after)
$depends_array = any2array($depends)
$depend_services_array = any2array($depend_services)
$docker_run_flags = docker_run_flags({
cpuset => any2array($cpuset),
disable_network => $disable_network,
dns => any2array($dns),
dns_search => any2array($dns_search),
env => any2array($env),
env_file => any2array($env_file),
expose => any2array($expose),
extra_params => any2array($extra_parameters),
hostentries => any2array($hostentries),
hostname => $hostname,
links => any2array($links),
lxc_conf => any2array($lxc_conf),
memory_limit => $memory_limit,
net => $net,
ports => any2array($ports),
labels => any2array($labels),
privileged => $privileged,
socket_connect => any2array($socket_connect),
tty => $tty,
username => $username,
volumes => any2array($volumes),
volumes_from => any2array($volumes_from),
read_only => $read_only,
health_check_cmd => $health_check_cmd,
restart_on_unhealthy => $restart_on_unhealthy,
health_check_interval => $health_check_interval,
osfamily => $facts['os']['family'],
}
)
$sanitised_title = docker::sanitised_name($title)
if empty($depends_array) {
$sanitised_depends_array = []
} else {
$sanitised_depends_array = docker::sanitised_name($depends_array)
}
if empty($after_array) {
$sanitised_after_array = []
} else {
$sanitised_after_array = docker::sanitised_name($after_array)
}
if $facts['os']['family'] == 'windows' {
$exec_environment = "PATH=${facts['docker_program_files_path']}/Docker/;${facts['docker_systemroot']}/System32/"
$exec_timeout = 3000
$exec_path = ["${facts['docker_program_files_path']}/Docker/"]
$exec_provider = 'powershell'
$cidfile = "${facts['docker_user_temp_path']}/${service_prefix}${sanitised_title}.cid"
$restart_check = "${docker_command} inspect ${sanitised_title} -f '{{ if eq \\\"unhealthy\\\" .State.Health.Status }} {{ .Name }}{{ end }}' | findstr ${sanitised_title}" # lint:ignore:140chars
$container_running_check = "\$state = ${docker_command} inspect ${sanitised_title} -f \"{{ .State.Running }}\"; if (\$state -ieq \"true\") { Exit 0 } else { Exit 1 }" # lint:ignore:140chars
} else {
$exec_environment = 'HOME=/root'
$exec_path = ['/bin', '/usr/bin']
$exec_timeout = 0
$exec_provider = undef
$cidfile = "/var/run/${service_prefix}${sanitised_title}.cid"
$restart_check = "${docker_command} inspect ${sanitised_title} -f '{{ if eq \"unhealthy\" .State.Health.Status }} {{ .Name }}{{ end }}' | grep ${sanitised_title}" # lint:ignore:140chars
$container_running_check = "${docker_command} inspect ${sanitised_title} -f \"{{ .State.Running }}\" | grep true" # lint:ignore:140chars
}
if $restart_on_unhealthy {
exec { "Restart unhealthy container ${title} with docker":
command => "${docker_command} restart ${sanitised_title}",
onlyif => $restart_check,
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
}
if $restart {
if $ensure == 'absent' {
exec { "stop ${title} with docker":
command => "${docker_command} stop --time=${stop_wait_time} ${sanitised_title}",
onlyif => "${docker_command} inspect ${sanitised_title}",
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
exec { "remove ${title} with docker":
command => "${docker_command} rm -v ${sanitised_title}",
onlyif => "${docker_command} inspect ${sanitised_title}",
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
file { $cidfile:
ensure => absent,
}
} else {
$run_with_docker_command = [
"${docker_command} run -d ${docker_run_flags}",
"--name ${sanitised_title} --cidfile=${cidfile}",
"--restart=\"${restart}\" ${image} ${command}",
]
$inspect = ["${docker_command} inspect ${sanitised_title}",]
if $custom_unless {
$exec_unless = concat($custom_unless, $inspect)
} else {
$exec_unless = $inspect
}
if versioncmp($facts['puppetversion'], '6') < 0 {
exec { "run ${title} with docker":
command => join($run_with_docker_command, ' '),
unless => $exec_unless,
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
if $running == false {
exec { "stop ${title} with docker":
command => "${docker_command} stop --time=${stop_wait_time} ${sanitised_title}",
onlyif => $container_running_check,
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
} else {
exec { "start ${title} with docker":
command => "${docker_command} start ${sanitised_title}",
unless => $container_running_check,
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
}
} else {
$docker_params_changed_args = {
sanitised_title => $sanitised_title,
osfamily => $facts['os']['family'],
command => join($run_with_docker_command, ' '),
cidfile => $cidfile,
image => $image,
volumes => $volumes,
ports => $ports,
stop_wait_time => $stop_wait_time,
container_running => $running,
# logfile_path => ($facts['os']['family'] == 'windows') ? {
# true => ::docker_user_temp_path,
# default => '/tmp',
# },
}
$detect_changes = Deferred('docker_params_changed', [$docker_params_changed_args])
notify { "${title}_docker_params_changed":
message => $detect_changes,
}
}
}
} else {
$run_start_parameters = {
'before_start' => $before_start,
'remove_container_on_start' => $remove_container_on_start,
'docker_command' => $docker_command,
'remove_volume_on_start' => $remove_volume_on_start,
'sanitised_title' => $sanitised_title,
'pull_on_start' => $pull_on_start,
'image' => $image,
'verify_digest' => $verify_digest,
'docker_run_flags' => $docker_run_flags,
'command' => $command,
'after_create' => $after_create,
'net' => $net,
'valid_detach' => $valid_detach,
'after_start' => $after_start,
}
$docker_run_inline_start = epp('docker/docker-run-start.epp', $run_start_parameters)
$run_stop_parameters = {
'before_stop' => $before_stop,
'docker_command' => $docker_command,
'stop_wait_time' => $stop_wait_time,
'sanitised_title' => $sanitised_title,
'remove_container_on_stop' => $remove_container_on_stop,
'remove_volume_on_stop' => $remove_volume_on_stop,
'after_stop' => $after_stop,
}
$docker_run_inline_stop = epp('docker/docker-run-stop.epp', $run_stop_parameters)
case $service_provider_real {
'systemd': {
$initscript = "/etc/systemd/system/${service_prefix}${sanitised_title}.service"
$startscript = "/usr/local/bin/docker-run-${sanitised_title}-start.sh"
$stopscript = "/usr/local/bin/docker-run-${sanitised_title}-stop.sh"
$startstop_template = 'docker/usr/local/bin/docker-run.sh.epp'
$init_template = 'docker/etc/systemd/system/docker-run.epp'
$mode = '0644'
$hasstatus = true
}
'upstart': {
$initscript = "/etc/init.d/${service_prefix}${sanitised_title}"
$init_template = 'docker/etc/init.d/docker-run.epp'
$mode = '0750'
$startscript = undef
$stopscript = undef
$startstop_template = undef
$hasstatus = true
}
default: {
if $facts['os']['family'] != 'windows' {
fail('Docker needs a Debian or RedHat based system.')
} elsif $ensure == 'present' {
fail('Restart parameter is required for Windows')
}
$hasstatus = $docker::params::service_hasstatus
}
}
if $syslog_identifier {
$_syslog_identifier = $syslog_identifier
} else {
$_syslog_identifier = "${service_prefix}${sanitised_title}"
}
if $ensure == 'absent' {
if $facts['os']['family'] == 'windows' {
exec { "stop container ${service_prefix}${sanitised_title}":
command => "${docker_command} stop --time=${stop_wait_time} ${sanitised_title}",
onlyif => "${docker_command} inspect ${sanitised_title}",
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
notify => Exec["remove container ${service_prefix}${sanitised_title}"],
}
} else {
service { "${service_prefix}${sanitised_title}":
ensure => false,
enable => false,
hasstatus => $hasstatus,
provider => $service_provider_real,
notify => Exec["remove container ${service_prefix}${sanitised_title}"],
}
}
exec { "remove container ${service_prefix}${sanitised_title}":
command => "${docker_command} rm -v ${sanitised_title}",
onlyif => "${docker_command} inspect ${sanitised_title}",
environment => $exec_environment,
path => $exec_path,
refreshonly => true,
provider => $exec_provider,
timeout => $exec_timeout,
}
if $facts['os']['family'] != 'windows' {
file { "/etc/systemd/system/${service_prefix}${sanitised_title}.service":
ensure => absent,
}
if ($startscript) {
file { $startscript:
ensure => absent,
}
}
if ($stopscript) {
file { $stopscript:
ensure => absent,
}
}
} else {
file { $cidfile:
ensure => absent,
}
}
} else {
if ($startscript) {
file { $startscript:
ensure => file,
content => epp($startstop_template, { 'script' => $docker_run_inline_start }),
seltype => 'container_runtime_exec_t',
owner => 'root',
group => $docker_group,
mode => '0770',
}
}
if ($stopscript) {
file { $stopscript:
ensure => file,
content => epp($startstop_template, { 'script' => $docker_run_inline_stop }),
seltype => 'container_runtime_exec_t',
owner => 'root',
group => $docker_group,
mode => '0770',
}
}
if $service_provider_real == 'systemd' {
$init_template_parameters = {
'depend_services_array' => $depend_services_array,
'sanitised_after_array' => $sanitised_after_array,
'service_prefix' => $service_prefix,
'sanitised_depends_array' => $sanitised_depends_array,
'title' => $title,
'have_systemd_v230' => $docker::params::have_systemd_v230,
'extra_systemd_parameters' => $extra_systemd_parameters,
'systemd_restart' => $systemd_restart,
'_syslog_identifier' => $_syslog_identifier,
'syslog_facility' => $syslog_facility,
'sanitised_title' => $sanitised_title,
'remain_after_exit' => $remain_after_exit,
'service_name' => $service_name,
}
} elsif $service_provider_real == 'upstart' {
$init_template_parameters = {
'sanitised_after_array' => $sanitised_after_array,
'service_prefix' => $service_prefix,
'sanitised_depends_array' => $sanitised_depends_array,
'depend_services_array' => $depend_services_array,
'docker_command' => $docker_command,
'sanitised_title' => $sanitised_title,
'docker_run_inline_start' => $docker_run_inline_start,
'docker_run_inline_stop' => $docker_run_inline_stop,
}
}
file { $initscript:
ensure => file,
content => epp($init_template, $init_template_parameters),
seltype => 'container_unit_file_t',
owner => 'root',
group => $docker_group,
mode => $mode,
}
if $manage_service {
if $running == false {
service { "${service_prefix}${sanitised_title}":
ensure => $running,
enable => false,
hasstatus => $hasstatus,
require => File[$initscript],
}
} else {
# Transition help from moving from CID based container detection to
# Name-based container detection. See #222 for context.
# This code should be considered temporary until most people have
# transitioned. - 2015-04-15
if $initscript == "/etc/init.d/${service_prefix}${sanitised_title}" {
# This exec sequence will ensure the old-style CID container is stopped
# before we replace the init script with the new-style.
$transition_onlyif = [
"/usr/bin/test -f /var/run/docker-${sanitised_title}.cid &&",
"/usr/bin/test -f /etc/init.d/${service_prefix}${sanitised_title}",
]
exec { "/bin/sh /etc/init.d/${service_prefix}${sanitised_title} stop":
onlyif => join($transition_onlyif, ' '),
require => [],
}
-> file { "/var/run/${service_prefix}${sanitised_title}.cid":
ensure => absent,
}
-> File[$initscript]
}
service { "${service_prefix}${sanitised_title}":
ensure => $running and !$prepare_service_only,
enable => true,
provider => $service_provider_real,
hasstatus => $hasstatus,
require => File[$initscript],
}
}
if $docker_service {
if $docker_service == true {
Service['docker'] -> Service["${service_prefix}${sanitised_title}"]
if $restart_service_on_docker_refresh == true {
Service['docker'] ~> Service["${service_prefix}${sanitised_title}"]
}
} else {
Service[$docker_service] -> Service["${service_prefix}${sanitised_title}"]
if $restart_service_on_docker_refresh == true {
Service[$docker_service] ~> Service["${service_prefix}${sanitised_title}"]
}
}
}
}
if $service_provider_real == 'systemd' and !$prepare_service_only {
exec { "docker-${sanitised_title}-systemd-reload":
path => ['/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/'],
command => 'systemctl daemon-reload',
refreshonly => true,
require => [
File[$initscript],
File[$startscript],
File[$stopscript],
],
subscribe => [
File[$initscript],
File[$startscript],
File[$stopscript],
],
}
Exec["docker-${sanitised_title}-systemd-reload"] -> Service <| title == "${service_prefix}${sanitised_title}" |>
}
if $restart_service {
if $startscript or $stopscript {
[File[$initscript], File[$startscript], File[$stopscript],] ~> Service <| title == "${service_prefix}${sanitised_title}" |>
} else {
[File[$initscript],] ~> Service <| title == "${service_prefix}${sanitised_title}" |>
}
} else {
if $startscript or $stopscript {
[File[$initscript], File[$startscript], File[$stopscript],] -> Service <| title == "${service_prefix}${sanitised_title}" |>
} else {
[File[$initscript],] -> Service <| title == "${service_prefix}${sanitised_title}" |>
}
}
}
}
}

# @summary Create docker::run resources from a hash of instances.
#
# @param instance
# Hash of docker::run resources to create.
#
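# A usage sketch assuming Hiera-style data; the instance values below are
# illustrative only.
#
# @example Create docker::run resources from a hash
#   class { 'docker::run_instance':
#     instance => {
#       'helloworld' => {
#         'image'   => 'ubuntu:latest',
#         'command' => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
#       },
#     },
#   }
#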
class docker::run_instance (
Hash $instance
) {
create_resources(docker::run, $instance)
}

# @summary manage the docker service daemon
#
# @param tcp_bind
# Which tcp port, if any, to bind the docker service to.
#
# @param ip_forward
# This flag interacts with the IP forwarding setting on
# your host system's kernel.
#
# @param iptables
# Enable Docker's addition of iptables rules
#
# @param ip_masq
# Enable IP masquerading for the bridge's IP range.
#
# @param socket_bind
# Which local unix socket to bind the docker service to.
#
# @param socket_group
# Group ownership of the unix control socket.
#
# @param root_dir
# Specify a non-standard root directory for docker.
#
# @param extra_parameters
# Plain additional parameters to pass to the docker daemon
#
# @param shell_values
# Array of shell values to pass into init script config files
#
# @param manage_service
# Specify whether the service should be managed.
# Valid values are true and false.
# Defaults to true.
#
# @param docker_command
# Name of the docker command.
#
# @param docker_start_command
# Command used to start the docker daemon.
#
# @param service_name
# Name of the docker service.
#
# @param icc
# Enable or disable inter-container communication.
#
# @param bridge
# Attach containers to a pre-existing network bridge.
#
# @param fixed_cidr
# IPv4 subnet for fixed IP addresses.
#
# @param default_gateway
# IPv4 address of the container network's default gateway.
#
# @param ipv6
# Enable IPv6 networking.
#
# @param ipv6_cidr
# IPv6 subnet for fixed IP addresses.
#
# @param default_gateway_ipv6
# IPv6 address of the container network's default gateway.
#
# @param log_level
# Logging level for the daemon.
#
# @param log_driver
# Default logging driver for containers.
#
# @param log_opt
# Options for the logging driver.
#
# @param selinux_enabled
# Enable SELinux support.
#
# @param labels
# Labels to set on the daemon.
#
# @param dns
# DNS server(s) for containers to use.
#
# @param dns_search
# DNS search domain(s) for containers to use.
#
# @param service_state
# Whether the service should be running or stopped.
#
# @param service_enable
# Whether the service should start on boot.
#
# @param proxy
# HTTP/HTTPS proxy for the daemon to use.
#
# @param no_proxy
# Hosts that should bypass the proxy.
#
# @param execdriver
# Exec driver to use.
#
# @param bip
# IP address and netmask of the docker0 bridge.
#
# @param mtu
# MTU for container network interfaces.
#
# @param storage_driver
# Storage driver to use.
#
# @param dm_basesize
# Size of the base devicemapper device.
#
# @param dm_fs
# Filesystem to use for the base devicemapper image.
#
# @param dm_mkfsarg
# Extra mkfs arguments for the devicemapper filesystem.
#
# @param dm_mountopt
# Extra mount options for the devicemapper thin devices.
#
# @param dm_blocksize
# Custom blocksize for the devicemapper thin pool.
#
# @param dm_loopdatasize
# Size of the devicemapper loopback data file.
#
# @param dm_loopmetadatasize
# Size of the devicemapper loopback metadata file.
#
# @param dm_datadev
# (deprecated) Block device to use for devicemapper data.
#
# @param dm_metadatadev
# (deprecated) Block device to use for devicemapper metadata.
#
# @param tmp_dir_config
# Whether to set the tmp_dir option in the service config.
#
# @param tmp_dir
# Directory for docker temporary files.
#
# @param dm_thinpooldev
# Custom devicemapper block device to use for the thin pool.
#
# @param dm_use_deferred_removal
# Enable deferred device removal.
#
# @param dm_use_deferred_deletion
# Enable deferred device deletion.
#
# @param dm_blkdiscard
# Enable blkdiscard when removing devicemapper devices.
#
# @param dm_override_udev_sync_check
# Disable the devicemapper udev sync check.
#
# @param overlay2_override_kernel_check
# Override the overlay2 minimum kernel version check.
#
# @param storage_devs
# Block device(s) for docker-storage-setup to use.
#
# @param storage_vg
# Volume group for docker-storage-setup to use.
#
# @param storage_root_size
# Size of the root logical volume.
#
# @param storage_data_size
# Desired size of the docker data logical volume.
#
# @param storage_min_data_size
# Minimum size of the docker data logical volume.
#
# @param storage_chunk_size
# Chunk size for the thin pool.
#
# @param storage_growpart
# Whether to grow the root partition.
#
# @param storage_auto_extend_pool
# Enable automatic extension of the thin pool.
#
# @param storage_pool_autoextend_threshold
# Pool usage threshold that triggers auto-extension.
#
# @param storage_pool_autoextend_percent
# Amount by which the pool is auto-extended.
#
# @param storage_config
# Path to the storage configuration file.
#
# @param storage_config_template
# Template used to render the storage configuration file.
#
# @param storage_setup_file
# Path to the docker-storage-setup configuration file.
#
# @param service_provider
# Service provider to use for the docker service.
#
# @param service_config
# Path to the service configuration file.
#
# @param service_config_template
# Template used to render the service configuration file.
#
# @param service_overrides_template
# Template used to render the systemd service overrides.
#
# @param socket_overrides_template
# Template used to render the systemd socket overrides.
#
# @param socket_override
# Whether to install a systemd socket override.
#
# @param service_after_override
# Value to override the unit's After setting with.
#
# @param service_hasstatus
# Whether the service has a working status command.
#
# @param service_hasrestart
# Whether the service has a working restart command.
#
# @param daemon_environment_files
# Environment files to load in the daemon unit.
#
# @param tls_enable
# Enable TLS on the daemon socket.
#
# @param tls_verify
# Require TLS client verification.
#
# @param tls_cacert
# Path to the TLS CA certificate.
#
# @param tls_cert
# Path to the TLS server certificate.
#
# @param tls_key
# Path to the TLS server key.
#
# @param registry_mirror
# Preferred registry mirror(s).
#
# @param root_dir_flag
# Daemon flag used to set the docker root directory.
#
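# The daemon options above are normally set through the main docker class, which
# passes them through to this class. A sketch with illustrative values:
#
# @example Configure the daemon via the main docker class
#   class { 'docker':
#     tcp_bind    => ['tcp://127.0.0.1:2375'],
#     socket_bind => 'unix:///var/run/docker.sock',
#     log_level   => 'debug',
#   }
#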
class docker::service (
String $docker_command = $docker::docker_command,
String $docker_start_command = $docker::docker_start_command,
Optional[String] $service_name = $docker::service_name,
Optional[Variant[String,Array[String]]] $tcp_bind = $docker::tcp_bind,
Boolean $ip_forward = $docker::ip_forward,
Boolean $iptables = $docker::iptables,
Boolean $ip_masq = $docker::ip_masq,
Optional[Boolean] $icc = $docker::icc,
Optional[String] $bridge = $docker::bridge,
Optional[String] $fixed_cidr = $docker::fixed_cidr,
Optional[String] $default_gateway = $docker::default_gateway,
Optional[Boolean] $ipv6 = $docker::ipv6,
Optional[String] $ipv6_cidr = $docker::ipv6_cidr,
Optional[String] $default_gateway_ipv6 = $docker::default_gateway_ipv6,
String $socket_bind = $docker::socket_bind,
Optional[String] $log_level = $docker::log_level,
Optional[String] $log_driver = $docker::log_driver,
Array $log_opt = $docker::log_opt,
Optional[Boolean] $selinux_enabled = $docker::selinux_enabled,
Optional[Variant[String,Boolean]] $socket_group = $docker::socket_group,
Array $labels = $docker::labels,
Optional[Variant[String,Array]] $dns = $docker::dns,
Optional[Variant[String,Array]] $dns_search = $docker::dns_search,
String $service_state = $docker::service_state,
Boolean $service_enable = $docker::service_enable,
Boolean $manage_service = $docker::manage_service,
Optional[String] $root_dir = $docker::root_dir,
Optional[Variant[String,Array]] $extra_parameters = $docker::extra_parameters,
Optional[Variant[String,Array]] $shell_values = $docker::shell_values,
Optional[String] $proxy = $docker::proxy,
Optional[String] $no_proxy = $docker::no_proxy,
Optional[String] $execdriver = $docker::execdriver,
Optional[String] $bip = $docker::bip,
Optional[String] $mtu = $docker::mtu,
Optional[String] $storage_driver = $docker::storage_driver,
Optional[String] $dm_basesize = $docker::dm_basesize,
Optional[String] $dm_fs = $docker::dm_fs,
Optional[String] $dm_mkfsarg = $docker::dm_mkfsarg,
Optional[String] $dm_mountopt = $docker::dm_mountopt,
Optional[String] $dm_blocksize = $docker::dm_blocksize,
Optional[String] $dm_loopdatasize = $docker::dm_loopdatasize,
Optional[String] $dm_loopmetadatasize = $docker::dm_loopmetadatasize,
Optional[String] $dm_datadev = $docker::dm_datadev,
Optional[String] $dm_metadatadev = $docker::dm_metadatadev,
Optional[Boolean] $tmp_dir_config = $docker::tmp_dir_config,
Optional[String] $tmp_dir = $docker::tmp_dir,
Optional[String] $dm_thinpooldev = $docker::dm_thinpooldev,
Optional[Boolean] $dm_use_deferred_removal = $docker::dm_use_deferred_removal,
Optional[Boolean] $dm_use_deferred_deletion = $docker::dm_use_deferred_deletion,
Optional[Boolean] $dm_blkdiscard = $docker::dm_blkdiscard,
Optional[Boolean] $dm_override_udev_sync_check = $docker::dm_override_udev_sync_check,
Boolean $overlay2_override_kernel_check = $docker::overlay2_override_kernel_check,
Optional[String] $storage_devs = $docker::storage_devs,
Optional[String] $storage_vg = $docker::storage_vg,
Optional[String] $storage_root_size = $docker::storage_root_size,
Optional[String] $storage_data_size = $docker::storage_data_size,
Optional[String] $storage_min_data_size = $docker::storage_min_data_size,
Optional[String] $storage_chunk_size = $docker::storage_chunk_size,
Optional[Boolean] $storage_growpart = $docker::storage_growpart,
Optional[String] $storage_auto_extend_pool = $docker::storage_auto_extend_pool,
Optional[String] $storage_pool_autoextend_threshold = $docker::storage_pool_autoextend_threshold,
Optional[String] $storage_pool_autoextend_percent = $docker::storage_pool_autoextend_percent,
Optional[Variant[String,Boolean]] $storage_config = $docker::storage_config,
Optional[String] $storage_config_template = $docker::storage_config_template,
Optional[String] $storage_setup_file = $docker::storage_setup_file,
Optional[String] $service_provider = $docker::service_provider,
Optional[Variant[String,Boolean]] $service_config = $docker::service_config,
Optional[String] $service_config_template = $docker::service_config_template,
Optional[Variant[String,Boolean]] $service_overrides_template = $docker::service_overrides_template,
Optional[Variant[String,Boolean]] $socket_overrides_template = $docker::socket_overrides_template,
Optional[Boolean] $socket_override = $docker::socket_override,
Optional[Variant[String,Boolean]] $service_after_override = $docker::service_after_override,
Optional[Boolean] $service_hasstatus = $docker::service_hasstatus,
Optional[Boolean] $service_hasrestart = $docker::service_hasrestart,
Array $daemon_environment_files = $docker::daemon_environment_files,
Boolean $tls_enable = $docker::tls_enable,
Boolean $tls_verify = $docker::tls_verify,
Optional[String] $tls_cacert = $docker::tls_cacert,
Optional[String] $tls_cert = $docker::tls_cert,
Optional[String] $tls_key = $docker::tls_key,
Optional[Variant[String,Array]] $registry_mirror = $docker::registry_mirror,
String $root_dir_flag = $docker::root_dir_flag,
) {
unless $facts['os']['family'] =~ /(Debian|RedHat|windows)/ or $docker::acknowledge_unsupported_os {
fail('The docker::service class needs a Debian, RedHat or Windows based system.')
}
$dns_array = any2array($dns)
$dns_search_array = any2array($dns_search)
$labels_array = any2array($labels)
$extra_parameters_array = any2array($extra_parameters)
$shell_values_array = any2array($shell_values)
$tcp_bind_array = any2array($tcp_bind)
if $service_config != undef {
$_service_config = $service_config
} else {
if $facts['os']['family'] == 'Debian' {
$_service_config = "/etc/default/${service_name}"
} else {
$_service_config = undef
}
}
$_manage_service = $manage_service ? {
true => Service['docker'],
default => [],
}
$docker_storage_setup_parameters = {
'storage_driver' => $storage_driver,
'storage_devs' => $storage_devs,
'storage_vg' => $storage_vg,
'storage_root_size' => $storage_root_size,
'storage_data_size' => $storage_data_size,
'storage_min_data_size' => $storage_min_data_size,
'storage_chunk_size' => $storage_chunk_size,
'storage_growpart' => $storage_growpart,
'storage_auto_extend_pool' => $storage_auto_extend_pool,
'storage_pool_autoextend_threshold' => $storage_pool_autoextend_threshold,
'storage_pool_autoextend_percent' => $storage_pool_autoextend_percent,
}
if $facts['os']['family'] == 'RedHat' {
file { $storage_setup_file:
ensure => file,
force => true,
content => epp('docker/etc/sysconfig/docker-storage-setup.epp', $docker_storage_setup_parameters),
before => $_manage_service,
notify => $_manage_service,
}
}
if $facts['os']['family'] == 'windows' {
$dirs = [
"${facts['docker_program_data_path']}/docker/",
"${facts['docker_program_data_path']}/docker/config/",
]
$dirs.each |$dir| {
file { $dir:
ensure => directory,
}
}
}
$parameters_service_overrides_template = {
'service_after_override' => $service_after_override,
'docker_start_command' => $docker_start_command,
'daemon_environment_files' => $daemon_environment_files,
}
case $service_provider {
'systemd': {
file { '/etc/systemd/system/docker.service.d':
ensure => 'directory',
}
if $service_overrides_template {
file { '/etc/systemd/system/docker.service.d/service-overrides.conf':
ensure => file,
content => epp($service_overrides_template, $parameters_service_overrides_template),
seltype => 'container_unit_file_t',
notify => Exec['docker-systemd-reload-before-service'],
before => $_manage_service,
}
}
if $socket_override {
file { '/etc/systemd/system/docker.socket.d':
ensure => 'directory',
}
file { '/etc/systemd/system/docker.socket.d/socket-overrides.conf':
ensure => file,
content => epp($socket_overrides_template, { 'socket_group' => $socket_group }),
seltype => 'container_unit_file_t',
notify => Exec['docker-systemd-reload-before-service'],
before => $_manage_service,
}
}
exec { 'docker-systemd-reload-before-service':
path => ['/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/',],
command => 'systemctl daemon-reload > /dev/null',
notify => $_manage_service,
refreshonly => true,
}
}
'upstart': {
file { '/etc/init.d/docker':
ensure => 'link',
target => '/lib/init/upstart-job',
force => true,
notify => $_manage_service,
}
}
default: {}
}
# Workaround for docker 1.13 on RedHat 7
if $facts['docker_server_version'] {
if $facts['os']['family'] == 'RedHat' and $facts['docker_server_version'] =~ /1\.13.+/ {
$_skip_storage_config = true
} else {
$_skip_storage_config = false
}
} else {
$_skip_storage_config = false
}
$storage_config_parameters = {
'storage_driver' => $storage_driver,
'storage_devs' => $storage_devs,
'storage_vg' => $storage_vg,
'storage_root_size' => $storage_root_size,
'storage_data_size' => $storage_data_size,
'storage_min_data_size' => $storage_min_data_size,
'storage_chunk_size' => $storage_chunk_size,
'storage_growpart' => $storage_growpart,
'storage_auto_extend_pool' => $storage_auto_extend_pool,
'storage_pool_autoextend_threshold' => $storage_pool_autoextend_threshold,
'storage_pool_autoextend_percent' => $storage_pool_autoextend_percent,
'dm_basesize' => $dm_basesize,
'dm_fs' => $dm_fs,
'dm_mkfsarg' => $dm_mkfsarg,
'dm_mountopt' => $dm_mountopt,
'dm_blocksize' => $dm_blocksize,
'dm_loopdatasize' => $dm_loopdatasize,
'dm_loopmetadatasize' => $dm_loopmetadatasize,
'dm_thinpooldev' => $dm_thinpooldev,
'dm_datadev' => $dm_datadev,
'dm_metadatadev' => $dm_metadatadev,
'dm_use_deferred_removal' => $dm_use_deferred_removal,
'dm_use_deferred_deletion' => $dm_use_deferred_deletion,
'dm_blkdiscard' => $dm_blkdiscard,
'dm_override_udev_sync_check' => $dm_override_udev_sync_check,
'overlay2_override_kernel_check' => $overlay2_override_kernel_check,
}
if $storage_config {
unless $_skip_storage_config {
file { $storage_config:
ensure => file,
force => true, #force rewrite storage configuration
content => epp($storage_config_template, $storage_config_parameters),
notify => $_manage_service,
}
}
}
$parameters = {
'docker_command' => $docker_command,
'proxy' => $proxy,
'no_proxy' => $no_proxy,
'tmp_dir' => $tmp_dir,
'root_dir' => $root_dir,
'root_dir_flag' => $root_dir_flag,
'tcp_bind' => $tcp_bind,
'tcp_bind_array' => $tcp_bind_array,
'tls_enable' => $tls_enable,
'tls_verify' => $tls_verify,
'tls_cacert' => $tls_cacert,
'tls_cert' => $tls_cert,
'tls_key' => $tls_key,
'socket_bind' => $socket_bind,
'ip_forward' => $ip_forward,
'iptables' => $iptables,
'ip_masq' => $ip_masq,
'icc' => $icc,
'fixed_cidr' => $fixed_cidr,
'bridge' => $bridge,
'default_gateway' => $default_gateway,
'log_level' => $log_level,
'log_driver' => $log_driver,
'log_opt' => $log_opt,
'selinux_enabled' => $selinux_enabled,
'socket_group' => $socket_group,
'dns' => $dns,
'dns_array' => $dns_array,
'dns_search' => $dns_search,
'dns_search_array' => $dns_search_array,
'execdriver' => $execdriver,
'bip' => $bip,
'mtu' => $mtu,
'registry_mirror' => $registry_mirror,
'storage_driver' => $storage_driver,
'dm_basesize' => $dm_basesize,
'dm_fs' => $dm_fs,
'dm_mkfsarg' => $dm_mkfsarg,
'dm_mountopt' => $dm_mountopt,
'dm_blocksize' => $dm_blocksize,
'dm_loopdatasize' => $dm_loopdatasize,
'dm_loopmetadatasize' => $dm_loopmetadatasize,
'dm_thinpooldev' => $dm_thinpooldev,
'dm_datadev' => $dm_datadev,
'dm_metadatadev' => $dm_metadatadev,
'dm_use_deferred_removal' => $dm_use_deferred_removal,
'dm_use_deferred_deletion' => $dm_use_deferred_deletion,
'dm_blkdiscard' => $dm_blkdiscard,
'dm_override_udev_sync_check' => $dm_override_udev_sync_check,
'overlay2_override_kernel_check' => $overlay2_override_kernel_check,
'labels' => $labels,
'extra_parameters' => $extra_parameters,
'extra_parameters_array' => $extra_parameters_array,
'shell_values' => $shell_values,
'shell_values_array' => $shell_values_array,
'labels_array' => $labels_array,
'ipv6' => $ipv6,
'ipv6_cidr' => $ipv6_cidr,
'default_gateway_ipv6' => $default_gateway_ipv6,
'tmp_dir_config' => $tmp_dir_config,
}
if $_service_config {
file { $_service_config:
ensure => file,
force => true,
content => epp($service_config_template, $parameters),
notify => $_manage_service,
}
}
if $manage_service {
if $facts['os']['family'] == 'windows' {
reboot { 'pending_reboot':
when => 'pending',
onlyif => 'component_based_servicing',
timeout => 1,
}
}
if ! defined(Service['docker']) {
service { 'docker':
ensure => $service_state,
name => $service_name,
enable => $service_enable,
hasstatus => $service_hasstatus,
hasrestart => $service_hasrestart,
provider => $service_provider,
}
}
}
}

# @summary define that manages Docker services
#
# @param ensure
# This ensures that the service is present or not.
#
# @param image
# The Docker image to spawn the service from.
#
# @param detach
# Exit immediately instead of waiting for the service to converge (default true)
#
# @param env
# Set environment variables
#
# @param label
# Service labels.
# This is used as metadata to configure constraints etc.
#
# @param publish
# Publish port(s) as node ports.
#
# @param replicas
# Number of tasks (containers per service)
#
# @param tty
# Allocate a pseudo-TTY
#
# @param user
# Username or UID (format: <name|uid>[:<group|gid>])
#
# @param workdir
# Working directory inside the container
#
# @param extra_params
# Allows you to pass any other flag that the Docker service create supports.
# This must be passed as an array. See docker service create --help for all options
#
# @param update
# This changes the docker command to
# docker service update; you must pass a service name with this option
#
# @param scale
# This changes the docker command to
# docker service scale; this can only be used with service name and
# replicas
#
# @param host_socket
# This will allow the service to connect to the host linux socket.
#
# @param registry_mirror
# This will allow the service to set a registry mirror.
#
# @param mounts
# Allows attaching filesystem mounts to the service (specified as an array)
#
# @param networks
# Allows attaching the service to networks (specified as an array)
#
# @param command
# Command to run on the container
#
# @param create
# Whether the service should be created.
#
# @param service_name
# The name of the Docker service.
#
define docker::services (
Enum[present,absent] $ensure = 'present',
Boolean $create = true,
Boolean $update = false,
Boolean $scale = false,
Boolean $detach = true,
Boolean $tty = false,
Array $env = [],
Array $label = [],
Array $extra_params = [],
Optional[Variant[String,Array]] $image = undef,
Optional[Variant[String,Array]] $service_name = undef,
Optional[Variant[String,Array]] $publish = undef,
Optional[Variant[String,Array]] $replicas = undef,
Optional[Variant[String,Array]] $user = undef,
Optional[Variant[String,Array]] $workdir = undef,
Optional[Variant[String,Array]] $host_socket = undef,
Optional[Variant[String,Array]] $registry_mirror = undef,
Optional[Variant[String,Array]] $mounts = undef,
Optional[Array] $networks = undef,
Optional[Variant[String,Array]] $command = undef,
) {
include docker::params
$docker_command = "${docker::params::docker_command} service"
if $ensure == 'absent' {
if $update {
fail('When removing a service you can not update it.')
}
if $scale {
fail('When removing a service you can not scale it.')
}
}
if $facts['os']['family'] == 'windows' {
$exec_environment = "PATH=${facts['docker_program_files_path']}/Docker/;${facts['docker_systemroot']}/System32/"
$exec_path = ["${facts['docker_program_files_path']}/Docker/",]
$exec_provider = 'powershell'
$exec_timeout = 3000
} else {
$exec_environment = 'HOME=/root'
$exec_path = ['/bin', '/usr/bin',]
$exec_provider = undef
$exec_timeout = 0
}
if $create {
$docker_service_create_flags = docker_service_flags({
detach => $detach,
env => any2array($env),
service_name => $service_name,
label => any2array($label),
publish => $publish,
replicas => $replicas,
tty => $tty,
user => $user,
workdir => $workdir,
extra_params => any2array($extra_params),
image => $image,
host_socket => $host_socket,
registry_mirror => $registry_mirror,
mounts => $mounts,
networks => $networks,
command => $command,
}
)
$exec_create = "${docker_command} create --name ${docker_service_create_flags}"
$unless_create = "docker service ps ${service_name}"
exec { "${title} docker service create":
command => $exec_create,
environment => $exec_environment,
path => $exec_path,
timeout => $exec_timeout,
provider => $exec_provider,
unless => $unless_create,
}
}
if $update {
$docker_service_flags = docker_service_flags({
detach => $detach,
env => any2array($env),
service_name => $service_name,
label => any2array($label),
publish => $publish,
replicas => $replicas,
tty => $tty,
user => $user,
workdir => $workdir,
extra_params => any2array($extra_params),
image => $image,
host_socket => $host_socket,
registry_mirror => $registry_mirror,
}
)
$exec_update = "${docker_command} update ${docker_service_flags}"
exec { "${title} docker service update":
command => $exec_update,
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
}
if $scale {
$docker_service_flags = docker_service_flags({
service_name => $service_name,
replicas => $replicas,
extra_params => any2array($extra_params),
}
)
$exec_scale = "${docker_command} scale ${service_name}=${replicas}"
exec { "${title} docker service scale":
command => $exec_scale,
environment => $exec_environment,
path => $exec_path,
timeout => $exec_timeout,
provider => $exec_provider,
}
}
if $ensure == 'absent' {
exec { "${title} docker service remove":
command => "docker service rm ${service_name}",
onlyif => "docker service ps ${service_name}",
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
}
}
}
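
A minimal sketch of declaring this define (title and parameter values here are illustrative, not part of the module):

```puppet
docker::services { 'redis_service':
  create       => true,
  image        => 'redis:latest',
  publish      => '6379:6379',
  replicas     => '5',
  extra_params => ['--update-delay 1m'],
}
```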


@ -0,0 +1,80 @@
# @summary
# Deploys Docker stacks from compose v3 files or bundle files
#
# @param ensure
# This ensures that the stack is present or not.
#
# @param stack_name
# The name of the stack that you are deploying
#
# @param bundle_file
# Path to a Distributed Application Bundle file
# Please note this is experimental
#
# @param prune
# Prune services that are no longer referenced
#
# @param resolve_image
# Query the registry to resolve image digest and supported platforms
# Only accepts ("always"|"changed"|"never")
#
# @param with_registry_auth
# Send registry authentication details to Swarm agents
#
# @param compose_files
# An array of paths to compose v3 files used to deploy the stack
define docker::stack (
Enum[present,absent] $ensure = 'present',
Optional[String] $stack_name = undef,
Optional[String] $bundle_file = undef,
Optional[Array] $compose_files = undef,
Boolean $prune = false,
Boolean $with_registry_auth = false,
Optional[Enum['always','changed','never']] $resolve_image = undef,
) {
include docker::params
deprecation('docker::stack','The docker stack define type will be deprecated in a future release. Please migrate to the docker_stack type/provider.')
$docker_command = "${docker::params::docker_command} stack"
if $facts['os']['family'] == 'windows' {
$exec_path = ['C:/Program Files/Docker/',]
$check_stack = "\$info = docker stack ls | select-string -pattern ${stack_name}
if (\$info -eq \$null) { Exit 1 } else { Exit 0 }"
$provider = 'powershell'
} else {
$exec_path = ['/bin', '/usr/bin',]
$check_stack = "${docker_command} ls | grep '${stack_name}'"
$provider = undef
}
if $ensure == 'present' {
$docker_stack_flags = docker_stack_flags ({
stack_name => $stack_name,
bundle_file => $bundle_file,
compose_files => $compose_files,
prune => $prune,
with_registry_auth => $with_registry_auth,
resolve_image => $resolve_image,
}
)
$exec_stack = "${docker_command} deploy ${docker_stack_flags} ${stack_name}"
exec { "docker stack create ${stack_name}":
command => $exec_stack,
unless => $check_stack,
path => $exec_path,
provider => $provider,
}
}
if $ensure == 'absent' {
exec { "docker stack destroy ${stack_name}":
command => "${docker_command} rm ${stack_name}",
onlyif => $check_stack,
path => $exec_path,
provider => $provider,
}
}
}
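
A minimal sketch of declaring this define (stack name and file path are illustrative):

```puppet
docker::stack { 'yourapp':
  ensure        => present,
  stack_name    => 'yourapp',
  compose_files => ['/tmp/docker-compose.yaml'],
  resolve_image => 'always',
}
```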


@ -0,0 +1,161 @@
# @summary
# Manages a Docker Swarm Mode cluster
#
# @param ensure
# This ensures that the cluster is present or not.
# Note this forcefully removes a node from the cluster. Make sure all worker nodes
# have been removed before managers
#
# @param init
# This creates the first manager node for a new cluster.
# Set init to true to create a new cluster
#
# @param join
# This adds either a worker or manager node to the cluster.
# The role of the node is defined by the join token.
# Set to true to join the cluster
#
# @param advertise_addr
# The address that your node will advertise to the cluster for raft.
# On multihomed servers this flag must be passed
#
# @param autolock
# Enable manager autolocking (requiring an unlock key to start a stopped manager)
#
# @param cert_expiry
# Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
#
# @param default_addr_pool
# Array of default subnet pools for global scope networks (['30.30.0.0/16','40.40.0.0/16'])
#
# @param default_addr_pool_mask_length
# Default subnet pools mask length for default-addr-pools (CIDR block number)
#
# @param dispatcher_heartbeat
# Dispatcher heartbeat period (ns|us|ms|s|m|h) (default 5s)
#
# @param external_ca
# Specifications of one or more certificate signing endpoints
#
# @param force_new_cluster
# Force create a new cluster from current state
#
# @param listen_addr
# The address that your node will listen to the cluster for raft.
# On multihomed servers this flag must be passed
#
# @param max_snapshots
# Number of additional Raft snapshots to retain
#
# @param snapshot_interval
# Number of log entries between Raft snapshots (default 10000)
#
# @param token
# The authentication token to join the cluster. The token also defines the type of
# node (worker or manager)
#
# @param manager_ip
# The ip address of a manager node to join the cluster.
#
define docker::swarm (
Enum[present,absent] $ensure = 'present',
Boolean $init = false,
Boolean $join = false,
Optional[String] $advertise_addr = undef,
Boolean $autolock = false,
Optional[String] $cert_expiry = undef,
Optional[Array] $default_addr_pool = undef,
Optional[String] $default_addr_pool_mask_length = undef,
Optional[String] $dispatcher_heartbeat = undef,
Optional[String] $external_ca = undef,
Boolean $force_new_cluster = false,
Optional[String] $listen_addr = undef,
Optional[String] $max_snapshots = undef,
Optional[String] $snapshot_interval = undef,
Optional[String] $token = undef,
Optional[String] $manager_ip = undef,
) {
include docker::params
if $facts['os']['family'] == 'windows' {
$exec_environment = "PATH=${facts['docker_program_files_path']}/Docker/"
$exec_path = ["${facts['docker_program_files_path']}/Docker/",]
$exec_timeout = 3000
$exec_provider = 'powershell'
$unless_init = '$info = docker info | select-string -pattern "Swarm: active"
if ($info -eq $null) { Exit 1 } else { Exit 0 }'
$unless_join = '$info = docker info | select-string -pattern "Swarm: active"
if ($info -eq $null) { Exit 1 } else { Exit 0 }'
$onlyif_leave = '$info = docker info | select-string -pattern "Swarm: active"
if ($info -eq $null) { Exit 1 } else { Exit 0 }'
} else {
$exec_environment = 'HOME=/root'
$exec_path = ['/bin', '/usr/bin',]
$exec_timeout = 0
$exec_provider = undef
$unless_init = 'docker info | grep -w "Swarm: active"'
$unless_join = 'docker info | grep -w "Swarm: active"'
$onlyif_leave = 'docker info | grep -w "Swarm: active"'
}
$docker_command = "${docker::params::docker_command} swarm"
if $init {
$docker_swarm_init_flags = docker_swarm_init_flags({
init => $init,
advertise_addr => $advertise_addr,
autolock => $autolock,
cert_expiry => $cert_expiry,
dispatcher_heartbeat => $dispatcher_heartbeat,
default_addr_pool => $default_addr_pool,
default_addr_pool_mask_length => $default_addr_pool_mask_length,
external_ca => $external_ca,
force_new_cluster => $force_new_cluster,
listen_addr => $listen_addr,
max_snapshots => $max_snapshots,
snapshot_interval => $snapshot_interval,
}
)
$exec_init = "${docker_command} ${docker_swarm_init_flags}"
exec { 'Swarm init':
command => $exec_init,
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
unless => $unless_init,
}
}
if $join {
$docker_swarm_join_flags = docker_swarm_join_flags({
join => $join,
advertise_addr => $advertise_addr,
listen_addr => $listen_addr,
token => $token,
}
)
$exec_join = "${docker_command} ${docker_swarm_join_flags} ${manager_ip}"
exec { 'Swarm join':
command => $exec_join,
environment => $exec_environment,
path => $exec_path,
provider => $exec_provider,
timeout => $exec_timeout,
unless => $unless_join,
}
}
if $ensure == 'absent' {
exec { 'Leave swarm':
command => 'docker swarm leave --force',
onlyif => $onlyif_leave,
path => $exec_path,
provider => $exec_provider,
}
}
}
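
A minimal sketch of initialising a cluster with this define (addresses are illustrative):

```puppet
docker::swarm { 'cluster_manager':
  init           => true,
  advertise_addr => '192.168.1.1',
  listen_addr    => '192.168.1.1',
}
```

A worker would instead set `join => true` together with `token` and `manager_ip`.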


@ -0,0 +1,9 @@
# @summary
# Manages a set of Docker swarms from a hash, suitable for Hiera data
#
# @param swarms
# A hash of docker::swarm resources to create
#
class docker::swarms (
Hash $swarms
) {
create_resources(docker::swarm, $swarms)
}


@ -0,0 +1,23 @@
# @summary manage docker group users
#
# @param create_user
# Boolean to control whether the user should be created
#
define docker::system_user (
Boolean $create_user = true
) {
include docker
$docker_group = $docker::docker_group
if $create_user {
ensure_resource('user', $name, { 'ensure' => 'present' })
User[$name] -> Exec["docker-system-user-${name}"]
}
exec { "docker-system-user-${name}":
command => "/usr/sbin/usermod -aG ${docker_group} ${name}",
unless => "/bin/grep '^${docker_group}:' /etc/group | /bin/grep -qw ${name}",
}
}
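
A minimal sketch of declaring this define (user name is illustrative); the title is the account added to the docker group:

```puppet
docker::system_user { 'deploy':
  create_user => true,
}
```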


@ -0,0 +1,10 @@
# @summary
# Reloads the systemd daemon configuration, for systems that have systemd
#
class docker::systemd_reload {
exec { 'docker-systemd-reload':
path => ['/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/',],
command => 'systemctl daemon-reload',
refreshonly => true,
}
}


@ -0,0 +1,9 @@
# @summary
# Manages a set of Docker volumes from a hash, suitable for Hiera data
#
# @param volumes
# A hash of docker_volume resources to create
#
class docker::volumes (
Hash $volumes
) {
create_resources(docker_volume, $volumes)
}


@ -0,0 +1,7 @@
# @summary
# Windows account that owns the docker services
#
define docker::windows_account (
) {
notice('Not implemented')
}


@ -0,0 +1,70 @@
{
"name": "puppetlabs-docker",
"version": "10.2.0",
"author": "puppetlabs",
"summary": "Module for installing and managing docker",
"license": "Apache-2.0",
"source": "https://github.com/puppetlabs/puppetlabs-docker",
"project_page": "https://github.com/puppetlabs/puppetlabs-docker",
"issues_url": "https://github.com/puppetlabs/puppetlabs-docker/issues",
"dependencies": [
{
"name": "puppetlabs/stdlib",
"version_requirement": ">= 9.0.0 < 10.0.0"
},
{
"name": "puppetlabs/apt",
"version_requirement": ">= 4.4.1 < 10.0.0"
},
{
"name": "puppetlabs/powershell",
"version_requirement": ">= 2.1.4 < 7.0.0"
},
{
"name": "puppetlabs/reboot",
"version_requirement": ">= 2.0.0 < 6.0.0"
}
],
"operatingsystem_support": [
{
"operatingsystem": "CentOS",
"operatingsystemrelease": [
"7",
"8",
"9"
]
},
{
"operatingsystem": "Ubuntu",
"operatingsystemrelease": [
"18.04",
"20.04",
"22.04"
]
},
{
"operatingsystem": "Debian",
"operatingsystemrelease": [
"10",
"11"
]
},
{
"operatingsystem": "Windows",
"operatingsystemrelease": [
"2016",
"2019",
"2022"
]
}
],
"requirements": [
{
"name": "puppet",
"version_requirement": ">= 7.0.0 < 9.0.0"
}
],
"pdk-version": "3.0.0",
"template-url": "https://github.com/puppetlabs/pdk-templates.git#main",
"template-ref": "heads/main-0-g79a2f93"
}


@ -0,0 +1,2 @@
---
ignore: []


@ -0,0 +1,40 @@
---
default:
provisioner: vagrant
images:
- ubuntu/xenial64
- ubuntu/bionic64
centos:
provisioner: vagrant
images:
- centos/7
debian:
provisioner: vagrant
images:
- debian/stretch64
- debian/buster64
win:
provisioner: vagrant
images:
- gusztavvargadr/windows-server
release_checks_6:
provisioner: abs
images:
- redhat-7-x86_64
- centos-7-x86_64
- ubuntu-1604-x86_64
- ubuntu-1804-x86_64
- ubuntu-2004-x86_64
- debian-9-x86_64
- debian-10-x86_64
- win-2016-x86_64
- win-2019-x86_64
release_checks_7:
provisioner: abs
images:
- centos-7-x86_64
- ubuntu-1804-x86_64
- ubuntu-2004-x86_64
- debian-10-x86_64
- win-2016-x86_64
- win-2019-x86_64


@ -0,0 +1,14 @@
{
"description": "List nodes in the swarm",
"input_method": "stdin",
"parameters": {
"filter": {
"description": "Filter output based on conditions provided",
"type": "Optional[String[1]]"
},
"quiet": {
"description": "Only display IDs",
"type": "Optional[Boolean]"
}
}
}


@ -0,0 +1,30 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def node_ls(filter, quiet)
cmd_string = 'docker node ls'
cmd_string += " --filter=#{filter}" unless filter.nil?
cmd_string += ' --quiet' if quiet
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
filter = params['filter']
quiet = params['quiet']
begin
result = node_ls(filter, quiet)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,14 @@
{
"description": "Update a node",
"input_method": "stdin",
"parameters": {
"force": {
"description": "Force remove a node from the swarm",
"type": "Optional[Boolean]"
},
"node": {
"description": "Hostname or ID of the node in the swarm",
"type": "String[1]"
}
}
}


@ -0,0 +1,30 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def node_rm(force, node)
cmd_string = 'docker node rm'
cmd_string += ' --force' if force
cmd_string += " #{node}" unless node.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
force = params['force']
node = params['node']
begin
result = node_rm(force, node)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,27 @@
{
"description": "Update a node",
"input_method": "stdin",
"parameters": {
"availability": {
"description": "Availability of the node",
"type": "Optional[Enum['active', 'pause', 'drain']]"
},
"role": {
"description": "Role of the node",
"type": "Optional[Enum['manager', 'worker']]"
},
"label_add": {
"description": "Add or update a node label (key=value)",
"type": "Optional[Array]"
},
"label_rm": {
"description": "Remove a node label if exists.",
"type": "Optional[Array]"
},
"node": {
"description": "ID of the node in the swarm",
"type": "String[1]"
}
}
}


@ -0,0 +1,47 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def node_update(availability, role, label_add, label_rm, node)
cmd_string = 'docker node update'
cmd_string += " --availability #{availability}" unless availability.nil?
cmd_string += " --role #{role}" unless role.nil?
if label_add.is_a? Array
label_add.each do |param|
cmd_string += " --label-add #{param}"
end
end
if label_rm.is_a? Array
label_rm.each do |param|
cmd_string += " --label-rm #{param}"
end
end
cmd_string += " #{node}" unless node.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
availability = params['availability']
role = params['role']
label_add = params['label_add']
label_rm = params['label_rm']
node = params['node']
begin
result = node_update(availability, role, label_add, label_rm, node)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end
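
The tasks in this module all follow the same pattern: parse JSON parameters from stdin, append a flag for each supplied value, put the positional argument last, then run the command with `Open3`. A standalone sketch of the flag-assembly step (the `build_node_update` helper is illustrative, not part of the module):

```ruby
require 'json'

# Build a `docker node update` command line from a params hash,
# mirroring the flag-assembly pattern used by the tasks above:
# one flag per non-nil value, positional node argument last.
def build_node_update(params)
  cmd = 'docker node update'
  cmd += " --availability #{params['availability']}" unless params['availability'].nil?
  Array(params['label_add']).each { |label| cmd += " --label-add #{label}" }
  Array(params['label_rm']).each { |label| cmd += " --label-rm #{label}" }
  cmd += " #{params['node']}" unless params['node'].nil?
  cmd
end

puts build_node_update(JSON.parse('{"availability":"drain","label_add":["env=prod"],"node":"worker-1"}'))
# => docker node update --availability drain --label-add env=prod worker-1
```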


@ -0,0 +1,38 @@
{
"description": "Create a new Docker service",
"input_method": "stdin",
"parameters": {
"service": {
"description": "The name of the service to create",
"type": "String[1]"
},
"image": {
"description": "The new image to use for the service",
"type": "String[1]"
},
"replicas": {
"description": "Number of replicas",
"type": "Integer"
},
"expose": {
"description": "Publish service ports externally to the swarm",
"type": "Variant[String,Array,Undef]"
},
"env": {
"description": "Set environment variables",
"type": "Optional[Hash]"
},
"command": {
"description": "Command to run on the container",
"type": "Variant[String,Array,Undef]"
},
"extra_params": {
"description": "Allows you to pass any other flag that the Docker service create supports.",
"type": "Optional[Array]"
},
"detach": {
"description": "Exit immediately instead of waiting for the service to converge",
"type": "Optional[Boolean]"
}
}
}


@ -0,0 +1,56 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def service_create(image, replicas, expose, env, command, extra_params, service, detach)
cmd_string = 'docker service create'
if extra_params.is_a? Array
extra_params.each do |param|
cmd_string += " #{param}"
end
end
cmd_string += " --name #{service}" unless service.nil?
cmd_string += " --replicas #{replicas}" unless replicas.nil?
cmd_string += " --publish #{expose}" unless expose.nil?
if env.is_a? Hash
env.each do |key, value|
cmd_string += " --env #{key}='#{value}'"
end
end
cmd_string += ' -d' if detach
cmd_string += " #{image}" unless image.nil?
if command.is_a? Array
cmd_string += " #{command.join(' ')}"
elsif command && command.to_s != 'undef'
cmd_string += " #{command}"
end
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
image = params['image']
replicas = params['replicas']
expose = params['expose']
env = params['env']
command = params['command']
extra_params = params['extra_params']
service = params['service']
detach = params['detach']
begin
result = service_create(image, replicas, expose, env, command, extra_params, service, detach)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,10 @@
{
"description": "Remove one replicated service",
"input_method": "stdin",
"parameters": {
"service": {
"description": "Name or ID of the service",
"type": "String[1]"
}
}
}


@ -0,0 +1,28 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def service_rm(service)
cmd_string = 'docker service rm'
cmd_string += " #{service}" unless service.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
service = params['service']
begin
result = service_rm(service)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,18 @@
{
"description": "Scale one replicated service",
"input_method": "stdin",
"parameters": {
"service": {
"description": "Name or ID of the service",
"type": "String[1]"
},
"scale": {
"description": "Number of replicas",
"type": "Integer"
},
"detach": {
"description": "Exit immediately instead of waiting for the service to converge",
"type": "Optional[Boolean]"
}
}
}


@ -0,0 +1,32 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def service_scale(service, scale, detach)
cmd_string = 'docker service scale'
cmd_string += " #{service}" unless service.nil?
cmd_string += "=#{scale}" unless scale.nil?
cmd_string += ' -d' if detach
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
service = params['service']
scale = params['scale']
detach = params['detach']
begin
result = service_scale(service, scale, detach)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,22 @@
{
"description": "Updates an existing service.",
"input_method": "stdin",
"parameters": {
"service": {
"description": "The service to update",
"type": "String[1]"
},
"image": {
"description": "The new image to use for the service",
"type": "String[1]"
},
"constraint_add": {
"description": "Add or update a service constraint (selector==value, selector!=value)",
"type": "Optional[Array]"
},
"constraint_rm": {
"description": "Remove a service constraint if exists.",
"type": "Optional[Array]"
}
}
}


@ -0,0 +1,45 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def service_update(image, service, constraint_add, constraint_rm)
cmd_string = 'docker service update'
cmd_string += " --image #{image}" unless image.nil?
if constraint_add.is_a? Array
constraint_add.each do |param|
cmd_string += " --constraint-add #{param}"
end
end
if constraint_rm.is_a? Array
constraint_rm.each do |param|
cmd_string += " --constraint-rm #{param}"
end
end
cmd_string += " #{service}" unless service.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
image = params['image']
service = params['service']
constraint_add = params['constraint_add']
constraint_rm = params['constraint_rm']
begin
result = service_update(image, service, constraint_add, constraint_rm)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,42 @@
{
"description": "Initializes a swarm",
"input_method": "stdin",
"parameters": {
"advertise_addr": {
"description": "Advertised address",
"type": "Optional[String[1]]"
},
"autolock": {
"description": "Enable manager autolocking",
"type": "Optional[Boolean]"
},
"cert_expiry": {
"description": "Validity period for node certificates",
"type": "Optional[String[1]]"
},
"dispatcher_heartbeat": {
"description": "Dispatcher heartbeat period",
"type": "Optional[String[1]]"
},
"external_ca": {
"description": "Specifications of one or more certificate signing endpoints",
"type": "Optional[String[1]]"
},
"force_new_cluster": {
"description": "Force create a new cluster from current state",
"type": "Optional[Boolean]"
},
"listen_addr": {
"description": "Listen address",
"type": "Optional[String[1]]"
},
"max_snapshots": {
"description": "Number of additional Raft snapshots to retain",
"type": "Optional[Integer[1]]"
},
"snapshot_interval": {
"description": "Number of log entries between Raft snapshots",
"type": "Optional[Integer[1]]"
}
}
}


@ -0,0 +1,44 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def swarm_init(advertise_addr, autolock, cert_expiry, dispatcher_heartbeat, external_ca, force_new_cluster, listen_addr, max_snapshots, snapshot_interval)
cmd_string = 'docker swarm init'
cmd_string += " --advertise-addr=#{advertise_addr}" unless advertise_addr.nil?
cmd_string += ' --autolock' if autolock
cmd_string += " --cert-expiry=#{cert_expiry}" unless cert_expiry.nil?
cmd_string += " --dispatcher-heartbeat=#{dispatcher_heartbeat}" unless dispatcher_heartbeat.nil?
cmd_string += " --external-ca=#{external_ca}" unless external_ca.nil?
cmd_string += ' --force-new-cluster' if force_new_cluster
cmd_string += " --listen-addr=#{listen_addr}" unless listen_addr.nil?
cmd_string += " --max-snapshots=#{max_snapshots}" unless max_snapshots.nil?
cmd_string += " --snapshot-interval=#{snapshot_interval}" unless snapshot_interval.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
advertise_addr = params['advertise_addr']
autolock = params['autolock']
cert_expiry = params['cert_expiry']
dispatcher_heartbeat = params['dispatcher_heartbeat']
external_ca = params['external_ca']
force_new_cluster = params['force_new_cluster']
listen_addr = params['listen_addr']
max_snapshots = params['max_snapshots']
snapshot_interval = params['snapshot_interval']
begin
result = swarm_init(advertise_addr, autolock, cert_expiry, dispatcher_heartbeat, external_ca, force_new_cluster, listen_addr, max_snapshots, snapshot_interval)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,22 @@
{
"description": "Join a swarm",
"input_method": "stdin",
"parameters": {
"advertise_addr": {
"description": "Advertised address",
"type": "Optional[String[1]]"
},
"listen_addr": {
"description": "Listen address",
"type": "Optional[String[1]]"
},
"token": {
"description": "Join token for the swarm",
"type": "String[1]"
},
"manager_ip": {
"description": "IP Address of the swarm manager",
"type": "String[1]"
}
}
}


@ -0,0 +1,34 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def swarm_join(advertise_addr, listen_addr, token, manager_ip)
cmd_string = 'docker swarm join'
cmd_string += " --advertise-addr=#{advertise_addr}" unless advertise_addr.nil?
cmd_string += " --listen-addr=#{listen_addr}" unless listen_addr.nil?
cmd_string += " --token=#{token}" unless token.nil?
cmd_string += " #{manager_ip}" unless manager_ip.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
advertise_addr = params['advertise_addr']
listen_addr = params['listen_addr']
token = params['token']
manager_ip = params['manager_ip']
begin
result = swarm_join(advertise_addr, listen_addr, token, manager_ip)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,10 @@
{
"description": "Leave a swarm",
"input_method": "stdin",
"parameters": {
"force": {
"description": "Force this node to leave the swarm, ignoring warnings",
"type": "Optional[Boolean]"
}
}
}


@ -0,0 +1,26 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def swarm_leave(force)
cmd_string = 'docker swarm leave'
cmd_string += ' -f' if force
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
force = params['force']
begin
result = swarm_leave(force)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,10 @@
{
"description": "Gets the swarm token from the server",
"input_method": "stdin",
"parameters": {
"node_role": {
"description": "The role of the node joining the swarm",
"type": "String[1]"
}
}
}


@ -0,0 +1,28 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def swarm_token(node_role)
cmd_string = 'docker swarm join-token -q'
cmd_string += " #{node_role}" unless node_role.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
node_role = params['node_role']
begin
result = swarm_token(node_role)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,15 @@
{
"description": "Updates an existing service.",
"input_method": "stdin",
"parameters": {
"service": {
"description": "The service to update",
"type": "String[1]"
},
"image": {
"description": "The new image to use for the service",
"type": "String[1]"
}
}
}


@ -0,0 +1,20 @@
#!/opt/puppetlabs/puppet/bin/ruby
# frozen_string_literal: true
require 'json'
require 'open3'
require 'puppet'
def service_update(image, service)
cmd_string = 'docker service update'
cmd_string += " --image #{image}" unless image.nil?
cmd_string += " #{service}" unless service.nil?
stdout, stderr, status = Open3.capture3(cmd_string)
raise Puppet::Error, "stderr: '#{stderr}'" if status != 0
stdout.strip
end
params = JSON.parse($stdin.read)
image = params['image']
service = params['service']
begin
puts 'Deprecated: use docker::service_update instead'
result = service_update(image, service)
puts result
exit 0
rescue Puppet::Error => e
puts(status: 'failure', error: e.message)
exit 1
end


@ -0,0 +1,37 @@
<% if $before_start { -%>
<%= $before_start %>
<% } -%>
<% if $remove_container_on_start { -%>
/usr/bin/<%= $docker_command %> rm <% if $remove_volume_on_start { %>-v<% } %> <%= $sanitised_title %> >/dev/null 2>&1
<% } -%>
<% if $pull_on_start { -%>
/usr/bin/<%= $docker_command %> pull <%= $image %>
<% } -%>
<% if $verify_digest { -%>
digest_local=$(docker image inspect <%= $image %> --format='{{index .RepoDigests 0}}')
digest_verify="<%= $verify_digest %>"
if [ "${digest_local##*:}" != "${digest_verify##*:}" ]; then
echo "Digest verify failed! Expected checksum 'sha256:$digest_verify' does not match with local checksum 'sha256:$digest_local'!"
exit 2
fi
<% } -%>
/usr/bin/<%= $docker_command %> create \
<%= $docker_run_flags %> \
--name <%= $sanitised_title %> \
<%= $image %> <% if $command { %> \
<%= $command %><% } %>
<% if $after_create { %><%= $after_create %><% } %>
<% if String(type($net, 'generalized')).index('Array') == 0 { %>
<% $net.each |$n| { %>
/usr/bin/<%= $docker_command %> network connect <%= $n %> <%= $sanitised_title %>
<% } %>
<% } %>
/usr/bin/<%= $docker_command %> start <% if ! $valid_detach { %>-a<% } %> <%= $sanitised_title %>
<% if $after_start { -%>
<%= $after_start %>
<% } -%>


@ -0,0 +1,10 @@
<% if $before_stop { -%>
<%= $before_stop %>
<% } -%>
/usr/bin/<%= $docker_command %> stop --time=<%= $stop_wait_time %> <%= $sanitised_title %>
<% if $remove_container_on_stop { -%>
/usr/bin/<%= $docker_command %> rm <% if $remove_volume_on_stop { %>-v<% } %> <%= $sanitised_title %>
<% } -%>
<% if $after_stop { -%>
<%= $after_stop %>
<% } -%>


@ -0,0 +1,56 @@
# This file is managed by Puppet and local changes
# may be overwritten
DOCKER="/usr/bin/<%= $docker_start_command %>"
other_args="<% -%>
<% if $root_dir { %><%= $root_dir_flag %> <%= $root_dir %><% } -%>
<% if $tcp_bind { %><% $tcp_bind_array.each |$param| { %> -H <%= $param %><% } %><% } -%>
<% if $tls_enable { %> --tls<% if $tls_verify { %> --tlsverify<% } %> --tlscacert=<%= $tls_cacert %> --tlscert=<%= $tls_cert %> --tlskey=<%= $tls_key %><% } -%>
<% if $socket_bind { %> -H <%= $socket_bind %><% } -%>
--ip-forward=<%= $ip_forward -%>
--iptables=<%= $iptables -%>
--ip-masq=<%= $ip_masq -%>
<% if $icc { %> --icc=<%= $icc %><% } -%>
<% if $fixed_cidr { %> --fixed-cidr <%= $fixed_cidr %><% } -%>
<% if $default_gateway { %> --default-gateway <%= $default_gateway %><% } -%>
<% if $bridge { %> --bridge <%= $bridge %><% } -%>
<% if $log_level { %> -l <%= $log_level %><% } -%>
<% if $log_driver { %> --log-driver <%= $log_driver %><% } -%>
<% if $log_driver { %><% if $log_opt { %><% $log_opt.each |$param| { %> --log-opt <%= $param %><% } %><% } -%><% } -%>
<% if $selinux_enabled { %> --selinux-enabled=<%= $selinux_enabled %><% } -%>
<% if $socket_group { %> -G <%= $socket_group %><% } -%>
<% if $dns { %><% $dns_array.each |$address| { %> --dns <%= $address %><% } %><% } -%>
<% if $dns_search { %><% $dns_search_array.each |$domain| { %> --dns-search <%= $domain %><% } %><% } -%>
<% if $execdriver { %> -e <%= $execdriver %><% } -%>
<% if $storage_driver { %> --storage-driver=<%= $storage_driver %><% } -%>
<% if $storage_driver == 'devicemapper' { -%>
<%- if $dm_basesize { %> --storage-opt dm.basesize=<%= $dm_basesize %><% } -%>
<%- if $dm_fs { %> --storage-opt dm.fs=<%= $dm_fs %><% } -%>
<%- if $dm_mkfsarg { %> --storage-opt "dm.mkfsarg=<%= $dm_mkfsarg %>"<% } -%>
<%- if $dm_mountopt { %> --storage-opt dm.mountopt=<%= $dm_mountopt %><% } -%>
<%- if $dm_blocksize { %> --storage-opt dm.blocksize=<%= $dm_blocksize %><% } -%>
<%- if $dm_loopdatasize { %> --storage-opt dm.loopdatasize=<%= $dm_loopdatasize %><% } -%>
<%- if $dm_loopmetadatasize { %> --storage-opt dm.loopmetadatasize=<%= $dm_loopmetadatasize %><% } -%>
<%- if $dm_thinpooldev { %> --storage-opt dm.thinpooldev=<%= $dm_thinpooldev -%>
<%- } else { -%>
<%- if $dm_datadev { %> --storage-opt dm.datadev=<%= $dm_datadev %><% } -%>
<%- if $dm_metadatadev { %> --storage-opt dm.metadatadev=<%= $dm_metadatadev %><% } -%>
<%- } -%>
<%- if $dm_use_deferred_removal { %> --storage-opt dm.use_deferred_removal=<%= $dm_use_deferred_removal %><% } -%>
<%- if $dm_use_deferred_deletion { %> --storage-opt dm.use_deferred_deletion=<%= $dm_use_deferred_deletion %><% } -%>
<%- if $dm_blkdiscard { %> --storage-opt dm.blkdiscard=<%= $dm_blkdiscard %><% } -%>
<%- if $dm_override_udev_sync_check { %> --storage-opt dm.override_udev_sync_check=<%= $dm_override_udev_sync_check %><% } -%>
<% } elsif $storage_driver == 'overlay2' { -%>
<%- if $overlay2_override_kernel_check { %> --storage-opt overlay2.override_kernel_check=<%= $overlay2_override_kernel_check %><% } -%>
<% } -%>
<% $labels.each |$label| { %> --label <%= $label %><% } -%>
<% if $extra_parameters { %><% $extra_parameters_array.each |$param| { %> <%= $param %><% } %><% } -%>
"
<% if $proxy { %>export http_proxy='<%= $proxy %>'
export https_proxy='<%= $proxy %>'<% } %>
<% if $no_proxy { %>export no_proxy='<%= $no_proxy %>'<% } %>
# This is also a handy place to tweak where Docker's temporary files go.
export TMPDIR="<%= $tmp_dir %>"
<% if $shell_values { %><% $shell_values_array.each |$param| { %>
<%= $param %><% } %><% } -%>


@ -0,0 +1,56 @@
# This file is managed by Puppet and local changes
# may be overwritten
DOCKER_BINARY="/usr/bin/<%= $docker_command %>"
DOCKER_OPTS="<% -%>
<% if $root_dir { %> -g <%= $root_dir %><% } %>
<% if $tcp_bind { %><% $tcp_bind_array.each |$param| { %> -H <%= $param %><% } %><% } %>
<% if $tls_enable { %> --tls<% if $tls_verify { %> --tlsverify<% } %> --tlscacert=<%= $tls_cacert %> --tlscert=<%= $tls_cert %> --tlskey=<%= $tls_key %><% } %>
<% if $socket_bind { %> -H <%= $socket_bind %><% } %>
--ip-forward=<%= $ip_forward -%>
--iptables=<%= $iptables -%>
--ip-masq=<%= $ip_masq -%>
<% if $icc { %> --icc=<%= $icc %><% } %>
<% if $fixed_cidr { %> --fixed-cidr <%= $fixed_cidr %><% } %>
<% if $default_gateway { %> --default-gateway <%= $default_gateway %><% } %>
<% if $bridge { %> --bridge <%= $bridge %><% } %>
<% if $log_level { %> -l <%= $log_level %><% } %>
<% if $log_driver { %> --log-driver <%= $log_driver %><% } %>
<% if $log_driver { %><% if $log_opt { %><% $log_opt.each |$param| { %> --log-opt <%= $param %><% } %><% } %><% } %>
<% if $selinux_enabled { %> --selinux-enabled=<%= $selinux_enabled %><% } %>
<% if $socket_group { %> -G <%= $socket_group %><% } %>
<% if $dns { %><% $dns_array.each |$address| { %> --dns <%= $address %><% } %><% } %>
<% if $dns_search { %><% $dns_search_array.each |$domain| { %> --dns-search <%= $domain %><% } %><% } %>
<% if $execdriver { %> -e <%= $execdriver %><% } %>
<% if $storage_driver { %> --storage-driver=<%= $storage_driver %><% } %>
<% if $storage_driver == 'devicemapper' { -%>
<%- if $dm_basesize { %> --storage-opt dm.basesize=<%= $dm_basesize %><% } %>
<%- if $dm_fs { %> --storage-opt dm.fs=<%= $dm_fs %><% } %>
<%- if $dm_mkfsarg { %> --storage-opt "dm.mkfsarg=<%= $dm_mkfsarg %>"<% } %>
<%- if $dm_mountopt { %> --storage-opt dm.mountopt=<%= $dm_mountopt %><% } %>
<%- if $dm_blocksize { %> --storage-opt dm.blocksize=<%= $dm_blocksize %><% } %>
<%- if $dm_loopdatasize { %> --storage-opt dm.loopdatasize=<%= $dm_loopdatasize %><% } %>
<%- if $dm_loopmetadatasize { %> --storage-opt dm.loopmetadatasize=<%= $dm_loopmetadatasize %><% } %>
<%- if $dm_thinpooldev { %> --storage-opt dm.thinpooldev=<%= $dm_thinpooldev %><% } else { %>
<%- if $dm_datadev { %> --storage-opt dm.datadev=<%= $dm_datadev %><% } %>
<%- if $dm_metadatadev { %> --storage-opt dm.metadatadev=<%= $dm_metadatadev %><% } %>
<% } %>
<%- if $dm_use_deferred_removal { %> --storage-opt dm.use_deferred_removal=<%= $dm_use_deferred_removal %><% } %>
<%- if $dm_use_deferred_deletion { %> --storage-opt dm.use_deferred_deletion=<%= $dm_use_deferred_deletion %><% } %>
<%- if $dm_blkdiscard { %> --storage-opt dm.blkdiscard=<%= $dm_blkdiscard %><% } %>
<%- if $dm_override_udev_sync_check { %> --storage-opt dm.override_udev_sync_check=<%= $dm_override_udev_sync_check %><% } %>
<% } elsif $storage_driver == 'overlay2' { -%>
<%- if $overlay2_override_kernel_check { %> --storage-opt overlay2.override_kernel_check=<%= $overlay2_override_kernel_check %><% } %>
<% } -%>
<% $labels.each |$label| { %> --label <%= $label %><% } %>
<% if $extra_parameters { %><% $extra_parameters_array.each |$param| { %> <%= $param %><% } %><% } %>
"
<% if $proxy { %>export http_proxy='<%= $proxy %>'
export https_proxy='<%= $proxy %>'<% } -%>
<% if $no_proxy { %>export no_proxy='<%= $no_proxy %>'<% } -%>
# This is also a handy place to tweak where Docker's temporary files go.
export TMPDIR="<%= $tmp_dir %>"
<% if $shell_values { %><% $shell_values_array.each |$param| { %>
<%= $param %><% } %><% } -%>


@ -0,0 +1,70 @@
# Docker Upstart and SysVinit configuration file
#
# THIS FILE IS MANAGED BY PUPPET. Changes will be overwritten.
# # Customize location of Docker binary (especially for development testing).
DOCKER="/usr/bin/<%= $docker_command %>"
# # If you need Docker to use an HTTP proxy, it can also be specified here.
<% if $proxy { -%>
export http_proxy='<%= $proxy %>'
export https_proxy='<%= $proxy %>'
<% } -%>
<% if $no_proxy { -%>
export no_proxy='<%= $no_proxy.convert_to(Array).join(',') %>'
<% } -%>
# # This is also a handy place to tweak where Docker's temporary files go.
export TMPDIR="<%= $tmp_dir %>"
# # Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="\
<% if $root_dir { %><%= $root_dir_flag %> <%= $root_dir %><% } -%>
<% if $tcp_bind { %><% $tcp_bind_array.each |$param| { %> -H <%= $param %><% } %><% } -%>
<% if $tls_enable { %> --tls<% if $tls_verify { %> --tlsverify<% } %> --tlscacert=<%= $tls_cacert %> --tlscert=<%= $tls_cert %> --tlskey=<%= $tls_key %><% } -%>
<% if $socket_bind { %> -H <%= $socket_bind %><% } -%>
--ip-forward=<%= $ip_forward -%>
--iptables=<%= $iptables -%>
--ip-masq=<%= $ip_masq -%>
<% if $icc { %> --icc=<%= $icc %><% } -%>
<% if $fixed_cidr { %> --fixed-cidr <%= $fixed_cidr %><% } -%>
<% if $bridge { %> --bridge <%= $bridge %><% } -%>
<% if $default_gateway { %> --default-gateway <%= $default_gateway %><% } -%>
<% if $log_level { %> -l <%= $log_level %><% } -%>
<% if $log_driver { %> --log-driver <%= $log_driver %><% } -%>
<% if $log_driver { %><% if $log_opt { %><% $log_opt.each |$param| { %> --log-opt <%= $param %><% } %><% } -%><% } -%>
<% if $selinux_enabled { %> --selinux-enabled=<%= $selinux_enabled %><% } -%>
<% if $socket_group { %> -G <%= $socket_group %><% } -%>
<% if $dns { %><% $dns_array.each |$address| { %> --dns <%= $address %><% } %><% } -%>
<% if $dns_search { %><% $dns_search_array.each |$domain| { %> --dns-search <%= $domain %><% } %><% } -%>
<% if $execdriver { %> -e <%= $execdriver %><% } -%>
<% if $bip { %> --bip=<%= $bip %><% } -%>
<% if $mtu { %> --mtu=<%= $mtu %><% } -%>
<% if type($registry_mirror, 'generalized') == String { %> --registry-mirror=<%= $registry_mirror %><% } -%>
<% if String(type($registry_mirror, 'generalized')).index('Array') == 0 { %><% $registry_mirror.each |$param| { %> --registry-mirror=<%= $param %><% } %><% } -%>
<% if $storage_driver { %> --storage-driver=<%= $storage_driver %><% } -%>
<% if $storage_driver == 'devicemapper' { -%>
<%- if $dm_basesize { %> --storage-opt dm.basesize=<%= $dm_basesize %><% } -%>
<%- if $dm_fs { %> --storage-opt dm.fs=<%= $dm_fs %><% } -%>
<%- if $dm_mkfsarg { %> --storage-opt "dm.mkfsarg=<%= $dm_mkfsarg %>"<% } -%>
<%- if $dm_mountopt { %> --storage-opt dm.mountopt=<%= $dm_mountopt %><% } -%>
<%- if $dm_blocksize { %> --storage-opt dm.blocksize=<%= $dm_blocksize %><% } -%>
<%- if $dm_loopdatasize { %> --storage-opt dm.loopdatasize=<%= $dm_loopdatasize %><% } -%>
<%- if $dm_loopmetadatasize { %> --storage-opt dm.loopmetadatasize=<%= $dm_loopmetadatasize %><% } -%>
<%- if $dm_thinpooldev { %> --storage-opt dm.thinpooldev=<%= $dm_thinpooldev -%>
<%- } else { -%>
<%- if $dm_datadev { %> --storage-opt dm.datadev=<%= $dm_datadev %><% } -%>
<%- if $dm_metadatadev { %> --storage-opt dm.metadatadev=<%= $dm_metadatadev %><% } -%>
<%- } -%>
<%- if $dm_use_deferred_removal { %> --storage-opt dm.use_deferred_removal=<%= $dm_use_deferred_removal %><% } -%>
<%- if $dm_use_deferred_deletion { %> --storage-opt dm.use_deferred_deletion=<%= $dm_use_deferred_deletion %><% } -%>
<%- if $dm_blkdiscard { %> --storage-opt dm.blkdiscard=<%= $dm_blkdiscard %><% } -%>
<%- if $dm_override_udev_sync_check { %> --storage-opt dm.override_udev_sync_check=<%= $dm_override_udev_sync_check %><% } -%>
<% } elsif $storage_driver == 'overlay2' { -%>
<%- if $overlay2_override_kernel_check { %> --storage-opt overlay2.override_kernel_check=<%= $overlay2_override_kernel_check %><% } -%>
<% } -%>
<% $labels.each |$label| { %> --label <%= $label %><% } -%>
<% if $extra_parameters { %><% $extra_parameters_array.each |$param| { %> <%= $param %><% } %><% } -%>
"
<% if $shell_values { %><% $shell_values_array.each |$param| { %>
<%= $param %><% } %><% } -%>


@ -0,0 +1,145 @@
<%-
$required_start = ["$network"] +
$sanitised_after_array.map |$s| { "${service_prefix}${s}"} +
$sanitised_depends_array.map |$s| { "${service_prefix}${s}"} +
$depend_services_array
$required_stop = ["$network"] +
$sanitised_depends_array.map |$d| { "${service_prefix}${d}"} +
$depend_services_array
-%>
#!/bin/sh
#
# This file is managed by Puppet and local changes
# may be overwritten
#
# /etc/rc.d/init.d/<servicename>
#
# Daemon for <%= $title %>
#
# chkconfig: 2345 97 15
# description: Docker container for <%= $title %>
### BEGIN INIT INFO
# Provides: <%= $service_prefix %><%= $sanitised_title %>
# Required-Start: <%= $required_start.unique.join(" ") %>
# Required-Stop: <%= $required_stop.unique.join(" ") %>
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop docker container for <%= $title %>
# Description: Docker container for <%= $title %>
### END INIT INFO
if [ -e /etc/init.d/functions ]; then
. /etc/init.d/functions
elif [ -e /lib/lsb/init-functions ]; then
. /lib/lsb/init-functions
failure() {
log_failure_msg "$@"
return 1
}
success() {
log_success_msg "$@"
return 0
}
else
failure() {
echo "fail: $@" >&2
exit 1
}
success() {
echo "success: $@" >&2
exit 0
}
fi
export HOME=/root/
docker="/usr/bin/<%= $docker_command %>"
prog="<%= $service_prefix %><%= $sanitised_title %>"
if [ -d /var/lock/subsys ]; then
lockfile="/var/lock/subsys/$prog"
else
unset lockfile
fi
# cid file referenced by the clean action; assumes the container is
# created with --cidfile pointing at this path
cidfile="/var/run/${prog}.cid"
start() {
[ -x $docker ] || exit 5
if [ "true" = "$($docker inspect --format='{{.State.Running}}' <%= $sanitised_title %> 2>/dev/null)" ]; then
failure
printf "Container <%= $sanitised_title %> is still running.\n"
exit 7
fi
printf "Starting $prog:\t"
<%= $docker_run_inline_start %>
retval=$?
echo
if [ $retval -eq 0 ]; then
success
else
failure
fi
}
stop() {
echo -n "Stopping $prog: "
<%= $docker_run_inline_stop %>
return $?
}
clean() {
if ! [ -f $cidfile ]; then
failure
echo
printf "$cidfile does not exist.\n"
else
cid="$(cat $cidfile)"
rm $cidfile
$docker rm -v -f $cid
retval=$?
return $retval
fi
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
if [ "true" = "$($docker inspect --format='{{.State.Running}}' <%= $sanitised_title %> 2>/dev/null)" ]; then
echo $prog is running
exit 0
else
echo $prog not running
exit 1
fi
;;
restart|reload)
stop
start
;;
clean)
clean
;;
cleanRestart)
stop
clean
start
;;
condrestart)
[ -f /var/lock/subsys/$prog ] && { stop; start; } || :
;;
*)
echo "Usage: $0 [start|stop|status|reload|restart|clean|cleanRestart|condrestart]"
exit 1
;;
esac
exit $?


@ -0,0 +1,19 @@
# This file is managed by Puppet and local changes
# may be overwritten
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
<% if $storage_driver { %>STORAGE_DRIVER=<%= $storage_driver %><% } %>
<% if $storage_devs { %>DEVS="<%= $storage_devs %>"<% } %>
<% if $storage_vg { %>VG=<%= $storage_vg %><% } %>
<% if $storage_root_size { %>ROOT_SIZE=<%= $storage_root_size %><% } %>
<% if $storage_data_size { %>DATA_SIZE=<%= $storage_data_size %><% } %>
<% if $storage_min_data_size { %>MIN_DATA_SIZE=<%= $storage_min_data_size %><% } %>
<% if $storage_chunk_size { %>CHUNK_SIZE=<%= $storage_chunk_size %><% } %>
<% if $storage_growpart { %>GROWPART=<%= $storage_growpart %><% } %>
<% if $storage_auto_extend_pool { %>AUTO_EXTEND_POOL=<%= $storage_auto_extend_pool %><% } %>
<% if $storage_pool_autoextend_threshold { %>POOL_AUTOEXTEND_THRESHOLD=<%= $storage_pool_autoextend_threshold %><% } %>
<% if $storage_pool_autoextend_percent { %>POOL_AUTOEXTEND_PERCENT=<%= $storage_pool_autoextend_percent %><% } %>


@ -0,0 +1,39 @@
# This file is managed by Puppet and local changes
# may be overwritten
# This file may be automatically generated by an installation program.
# By default, Docker uses a loopback-mounted sparse file in
# /var/lib/docker. The loopback makes it slower, and there are some
# restrictive defaults, such as 100GB max storage.
# If your installation did not set a custom storage for Docker, you
# may do it below.
# Example: Use a custom pair of raw logical volumes (one for metadata,
# one for data).
# DOCKER_STORAGE_OPTIONS = --storage-opt dm.metadatadev=/dev/mylogvol/my-docker-metadata --storage-opt dm.datadev=/dev/mylogvol/my-docker-data
DOCKER_STORAGE_OPTIONS="<% -%>
<% if $storage_driver { %> --storage-driver=<%= $storage_driver %><% } -%>
<% if $storage_driver == 'devicemapper' { -%>
<%- if $dm_basesize { %> --storage-opt dm.basesize=<%= $dm_basesize %><% } -%>
<%- if $dm_fs { %> --storage-opt dm.fs=<%= $dm_fs %><% } -%>
<%- if $dm_mkfsarg { %> --storage-opt "dm.mkfsarg=<%= $dm_mkfsarg %>"<% } -%>
<%- if $dm_mountopt { %> --storage-opt dm.mountopt=<%= $dm_mountopt %><% } -%>
<%- if $dm_blocksize { %> --storage-opt dm.blocksize=<%= $dm_blocksize %><% } -%>
<%- if $dm_loopdatasize { %> --storage-opt dm.loopdatasize=<%= $dm_loopdatasize %><% } -%>
<%- if $dm_loopmetadatasize { %> --storage-opt dm.loopmetadatasize=<%= $dm_loopmetadatasize %><% } -%>
<%- if $dm_thinpooldev { %> --storage-opt dm.thinpooldev=<%= $dm_thinpooldev -%>
<%- } else { -%>
<%- if $dm_datadev { %> --storage-opt dm.datadev=<%= $dm_datadev %><% } -%>
<%- if $dm_metadatadev { %> --storage-opt dm.metadatadev=<%= $dm_metadatadev %><% } -%>
<%- } -%>
<%- if $dm_use_deferred_removal { %> --storage-opt dm.use_deferred_removal=<%= $dm_use_deferred_removal %><% } -%>
<%- if $dm_use_deferred_deletion { %> --storage-opt dm.use_deferred_deletion=<%= $dm_use_deferred_deletion %><% } -%>
<%- if $dm_blkdiscard { %> --storage-opt dm.blkdiscard=<%= $dm_blkdiscard %><% } -%>
<%- if $dm_override_udev_sync_check { %> --storage-opt dm.override_udev_sync_check=<%= $dm_override_udev_sync_check %><% } -%>
<% } elsif $storage_driver == 'overlay2' { -%>
<%- if $overlay2_override_kernel_check { %> --storage-opt overlay2.override_kernel_check=<%= $overlay2_override_kernel_check %><% } -%>
<% } -%>
"


@ -0,0 +1,40 @@
# This file is managed by Puppet and local changes
# may be overwritten
DOCKER="/usr/bin/<%= $docker_command %>"
other_args="<% -%>
<% if $root_dir { %><%= $root_dir_flag %> <%= $root_dir %><% } -%>
<% if $tcp_bind { %><% $tcp_bind_array.each |$param| { %> -H <%= $param %><% } %><% } -%>
<% if $tls_enable { %> --tls<% if $tls_verify { %> --tlsverify<% } %> --tlscacert=<%= $tls_cacert %> --tlscert=<%= $tls_cert %> --tlskey=<%= $tls_key %><% } -%>
<% if $socket_bind { %> -H <%= $socket_bind %><% } -%>
--ip-forward=<%= $ip_forward -%>
--iptables=<%= $iptables -%>
--ip-masq=<%= $ip_masq -%>
<% if $icc { %> --icc=<%= $icc %><% } -%>
<% if $fixed_cidr { %> --fixed-cidr <%= $fixed_cidr %><% } -%>
<% if $bridge { %> --bridge <%= $bridge %><% } -%>
<% if $default_gateway { %> --default-gateway <%= $default_gateway %><% } -%>
<% if $ipv6 { %> --ipv6<% } -%>
<% if $ipv6_cidr { %> --fixed-cidr-v6 <%= $ipv6_cidr %><% } -%>
<% if $default_gateway_ipv6 { %> --default-gateway-v6 <%= $default_gateway_ipv6 %><% } -%>
<% if $log_level { %> -l <%= $log_level %><% } -%>
<% if $log_driver { %> --log-driver <%= $log_driver %><% } -%>
<% if $log_driver { %><% if $log_opt { %><% $log_opt.each |$param| { %> --log-opt <%= $param %><% } %><% } -%><% } -%>
<% if $selinux_enabled { %> --selinux-enabled=<%= $selinux_enabled %><% } -%>
<% if $socket_group { %> -G <%= $socket_group %><% } -%>
<% if $dns { %><% $dns_array.each |$address| { %> --dns <%= $address %><% } %><% } -%>
<% if $dns_search { %><% $dns_search_array.each |$domain| { %> --dns-search <%= $domain %><% } %><% } -%>
<% if $execdriver { %> -e <%= $execdriver %><% } -%>
<% if $bip { %> --bip=<%= $bip %><% } -%>
<% if $mtu { %> --mtu=<%= $mtu %><% } -%>
<% $labels.each |$label| { %> --label <%= $label %><% } -%>
<% if $extra_parameters { %><% $extra_parameters_array.each |$param| { %> <%= $param %><% } %><% } -%>"
<% if $proxy { %>export http_proxy='<%= $proxy %>'
export https_proxy='<%= $proxy %>'<% } %>
<% if $no_proxy { %>export no_proxy='<%= $no_proxy %>'<% } %>
# This is also a handy place to tweak where Docker's temporary files go.
export TMPDIR="<%= $tmp_dir %>"
<% if $shell_values { %><% $shell_values_array.each |$param| { %>
<%= $param %><% } %><% } -%>


@ -0,0 +1,38 @@
# This file is managed by Puppet and local changes
# may be overwritten
OPTIONS="<% if $root_dir { %><%= $root_dir_flag %> <%= $root_dir %><% } -%>
<% if $tcp_bind { %><% $tcp_bind_array.each |$param| { %> -H <%= $param %><% } %><% } -%>
<% if $tls_enable { %> --tls<% if $tls_verify { %> --tlsverify<% } %> --tlscacert=<%= $tls_cacert %> --tlscert=<%= $tls_cert %> --tlskey=<%= $tls_key %><% } -%>
<% if $socket_bind { %> -H <%= $socket_bind %><% } -%>
--ip-forward=<%= $ip_forward -%>
--iptables=<%= $iptables -%>
--ip-masq=<%= $ip_masq -%>
<% if $icc { %> --icc=<%= $icc %><% } -%>
<% if type($registry_mirror, 'generalized') == String { %> --registry-mirror=<%= $registry_mirror %><% } -%>
<% if String(type($registry_mirror, 'generalized')).index('Array') == 0 { %><% $registry_mirror.each |$param| { %> --registry-mirror=<%= $param %><% } %><% } -%>
<% if $fixed_cidr { %> --fixed-cidr <%= $fixed_cidr %><% } -%>
<% if $default_gateway { %> --default-gateway <%= $default_gateway %><% } -%>
<% if $ipv6 { %> --ipv6<% } -%>
<% if $ipv6_cidr { %> --fixed-cidr-v6 <%= $ipv6_cidr %><% } -%>
<% if $default_gateway_ipv6 { %> --default-gateway-v6 <%= $default_gateway_ipv6 %><% } -%>
<% if $bridge { %> --bridge <%= $bridge %><% } -%>
<% if $log_level { %> -l <%= $log_level %><% } -%>
<% if $log_driver { %> --log-driver <%= $log_driver %><% } -%>
<% if $log_driver { %><% if $log_opt { %><% $log_opt.each |$param| { %> --log-opt <%= $param %><% } %><% } -%><% } -%>
<% if $selinux_enabled { %> --selinux-enabled=<%= $selinux_enabled %><% } -%>
<% if $socket_group { %> -G <%= $socket_group %><% } -%>
<% if $dns { %><% $dns_array.each |$address| { %> --dns <%= $address %><% } %><% } -%>
<% if $dns_search { %><% $dns_search_array.each |$domain| { %> --dns-search <%= $domain %><% } %><% } -%>
<% if $execdriver { %> -e <%= $execdriver %><% } -%>
<% if $bip { %> --bip=<%= $bip %><% } -%>
<% if $mtu { %> --mtu=<%= $mtu %><% } -%>
<% if $labels { %><% $labels_array.each |$label| { %> --label <%= $label %><% } %><% } -%>
<% if $extra_parameters { %><% $extra_parameters_array.each |$param| { %> <%= $param %><% } %><% } -%>"
<% if $proxy { %>http_proxy='<%= $proxy %>'
https_proxy='<%= $proxy %>'<% } %>
<% if $no_proxy { %>no_proxy='<%= $no_proxy %>'<% } %>
# This is also a handy place to tweak where Docker's temporary files go.
<% if $tmp_dir_config { %>TMPDIR="<%= $tmp_dir %>"<% } else { %># TMPDIR="<%= $tmp_dir %>"<% } %>
<% if $shell_values { %><% $shell_values_array.each |$param| { %> <%= $param %><% } %><% } -%>


@ -0,0 +1,63 @@
<%-
$depend_services = $depend_services_array.map |$s| { if $s =~ /\.[a-z]+$/ { $s } else { "${s}.service" } }
$after = $sanitised_after_array.map |$s| { "${service_prefix}${s}.service" } +
$sanitised_depends_array.map |$s| { "${service_prefix}${s}.service"} +
$depend_services
$wants = $sanitised_after_array.map |$a| { "${service_prefix}${a}.service" }
$requires = $sanitised_depends_array.map |$d| { "${service_prefix}${d}.service" } +
$depend_services
-%>
# This file is managed by Puppet and local changes
# may be overwritten
[Unit]
Description=Daemon for <%= $title %>
After=<%= $after.unique.join(" ") %>
Wants=<%= $wants.unique.join(" ") %>
Requires=<%= $requires.unique.join(" ") %>
<%- if $have_systemd_v230 { -%>
StartLimitIntervalSec=20
StartLimitBurst=3
<% } -%>
<%- if $extra_systemd_parameters['Unit'] { -%>
<%- $extra_systemd_parameters['Unit'].each |$key, $value| { %>
<%= $key %>=<%= $value %>
<%- } -%>
<% } -%>
[Service]
Restart=<%= $systemd_restart %>
<%- unless $have_systemd_v230 { -%>
StartLimitInterval=20
StartLimitBurst=3
<% } -%>
TimeoutStartSec=0
RestartSec=5
Environment="HOME=/root"
<%- if $_syslog_identifier { -%>
SyslogIdentifier=<%= $_syslog_identifier %>
<% } -%>
<%- if $syslog_facility { -%>
SyslogFacility=<%= $syslog_facility %>
<% } -%>
ExecStart=/usr/local/bin/docker-run-<%= $sanitised_title %>-start.sh
ExecStop=-/usr/local/bin/docker-run-<%= $sanitised_title %>-stop.sh
<%- if $remain_after_exit { %>
RemainAfterExit=<%= $remain_after_exit %>
<% } -%>
<%- if $extra_systemd_parameters['Service'] { -%>
<%- $extra_systemd_parameters['Service'].each |$key, $value| { -%>
<%= $key %>=<%= $value %>
<%- } -%>
<% } -%>
[Install]
WantedBy=multi-user.target
<%- if $service_name { -%>
WantedBy=<%= $service_name %>.service
<% } -%>
<%- if $extra_systemd_parameters['Install'] { -%>
<%- $extra_systemd_parameters['Install'].each |$key, $value| { -%>
<%= $key %>=<%= $value %>
<%- } -%>
<% } -%>


@ -0,0 +1,11 @@
<% if $service_after_override { -%>
[Unit]
After=<%= $service_after_override %>
<% } -%>
[Service]
EnvironmentFile=-/etc/default/docker
EnvironmentFile=-/etc/default/docker-storage
ExecStart=
ExecStart=/usr/bin/<%= $docker_start_command %> $OPTIONS \
$DOCKER_STORAGE_OPTIONS


@ -0,0 +1,17 @@
<% if $service_after_override { -%>
[Unit]
After=<%= $service_after_override %>
<% } -%>
[Service]
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
<% if $daemon_environment_files { %><% $daemon_environment_files.each |$param| { %>EnvironmentFile=-<%= $param %>
<% } %><% } -%>
ExecStart=
ExecStart=/usr/bin/<%= $docker_start_command %> $OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY


@ -0,0 +1,2 @@
[Socket]
SocketGroup=<%= $socket_group %>


@ -0,0 +1,23 @@
#!/bin/bash
#
# Pulls a docker image.
# Returns 0 if there was a change.
# Returns 2 if there was no change.
# Returns 3 if something went wrong.
#
DOCKER_IMAGE=$1
BEFORE=$(<%= $docker_command %> inspect --type image --format='{{.Id}}' ${DOCKER_IMAGE} 2>/dev/null)
<%= $docker_command %> pull ${DOCKER_IMAGE}
AFTER=$(<%= $docker_command %> inspect --type image --format='{{.Id}}' ${DOCKER_IMAGE} 2>/dev/null)
if [[ -z $AFTER ]]; then
echo "Docker image ${DOCKER_IMAGE} failed to pull!"
exit 3
elif [[ $BEFORE == $AFTER ]]; then
echo "No updates to ${DOCKER_IMAGE} available. Currently on ${AFTER}."
exit 2
else
echo "${DOCKER_IMAGE} updated. Changed from ${BEFORE} to ${AFTER}."
exit 0
fi
