2022-12-08
10:35 <steve_munene> batch restarting varnishkafka-eventlogging.service in batches of 3, 30 seconds in between [production]
10:28 <ladsgroup@deploy1002> ladsgroup and ladsgroup: Backport for [[gerrit:865828|Set externallinks migration to WRITE_BOTH in testwiki (T321662)]] synced to the testservers: mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet [production]
10:27 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P42606 and previous config saved to /var/cache/conftool/dbconfig/20221208-102754-ladsgroup.json [production]
10:26 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:865828|Set externallinks migration to WRITE_BOTH in testwiki (T321662)]] [production]
10:25 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti5005.eqsin.wmnet [production]
10:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2168:3317 (T322618)', diff saved to https://phabricator.wikimedia.org/P42605 and previous config saved to /var/cache/conftool/dbconfig/20221208-102308-ladsgroup.json [production]
10:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2168:3317 (T322618)', diff saved to https://phabricator.wikimedia.org/P42604 and previous config saved to /var/cache/conftool/dbconfig/20221208-102052-ladsgroup.json [production]
10:20 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2168.codfw.wmnet with reason: Maintenance [production]
10:20 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2168.codfw.wmnet with reason: Maintenance [production]
10:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159 (T322618)', diff saved to https://phabricator.wikimedia.org/P42603 and previous config saved to /var/cache/conftool/dbconfig/20221208-102030-ladsgroup.json [production]
10:18 <hashar> contint1002: activated Icinga monitoring, all services are up and running # T313832 [production]
10:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P42602 and previous config saved to /var/cache/conftool/dbconfig/20221208-101247-ladsgroup.json [production]
10:05 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5005.eqsin.wmnet [production]
10:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159', diff saved to https://phabricator.wikimedia.org/P42600 and previous config saved to /var/cache/conftool/dbconfig/20221208-100524-ladsgroup.json [production]
10:01 <claime> Deploying puppet enforcement of zuul-merger on contint1002 [production]
09:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T322618)', diff saved to https://phabricator.wikimedia.org/P42599 and previous config saved to /var/cache/conftool/dbconfig/20221208-095741-ladsgroup.json [production]
09:57 <slyngshede@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host test-reimage2001.codfw.wmnet [production]
09:56 <steve_munene> restarting varnishkafka-webrequest.service on host cp1075 T323771 [production]
09:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159', diff saved to https://phabricator.wikimedia.org/P42598 and previous config saved to /var/cache/conftool/dbconfig/20221208-095017-ladsgroup.json [production]
09:50 <slyngshede@cumin1001> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) test-reimage2001.codfw.wmnet on all recursors [production]
09:50 <slyngshede@cumin1001> START - Cookbook sre.dns.wipe-cache test-reimage2001.codfw.wmnet on all recursors [production]
09:50 <slyngshede@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
09:50 <slyngshede@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM test-reimage2001.codfw.wmnet - slyngshede@cumin1001" [production]
09:49 <slyngshede@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM test-reimage2001.codfw.wmnet - slyngshede@cumin1001" [production]
09:46 <slyngshede@cumin1001> START - Cookbook sre.dns.netbox [production]
09:46 <slyngshede@cumin1001> START - Cookbook sre.ganeti.makevm for new host test-reimage2001.codfw.wmnet [production]
09:43 <hashar> contint1002: stopped puppet and manually started zuul-merger. I am monitoring it because last time we brought up a new one it had some issues here and there # T313832 [production]
09:38 <hashar> contint1001: manually stopped and masked zuul-merger. It is under maintenance mode in Icinga # T313832 [production]
09:35 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159 (T322618)', diff saved to https://phabricator.wikimedia.org/P42597 and previous config saved to /var/cache/conftool/dbconfig/20221208-093511-ladsgroup.json [production]
09:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2159 (T322618)', diff saved to https://phabricator.wikimedia.org/P42596 and previous config saved to /var/cache/conftool/dbconfig/20221208-093255-ladsgroup.json [production]
09:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2095.codfw.wmnet with reason: Maintenance [production]
09:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2095.codfw.wmnet with reason: Maintenance [production]
09:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
09:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
09:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150 (T322618)', diff saved to https://phabricator.wikimedia.org/P42595 and previous config saved to /var/cache/conftool/dbconfig/20221208-093218-ladsgroup.json [production]
09:25 <hashar@deploy1002> Finished deploy [zuul/deploy@4c6859c]: Install Zuul virtualenv on contint1002 # T313832 (duration: 00m 07s) [production]
09:24 <hashar@deploy1002> Started deploy [zuul/deploy@4c6859c]: Install Zuul virtualenv on contint1002 # T313832 [production]
09:17 <hashar@deploy1002> Finished deploy [integration/docroot@2e0d44b]: Warm up contint1002 and test php-fpm restart # T313832 (duration: 00m 03s) [production]
09:17 <hashar@deploy1002> Started deploy [integration/docroot@2e0d44b]: Warm up contint1002 and test php-fpm restart # T313832 [production]
09:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P42594 and previous config saved to /var/cache/conftool/dbconfig/20221208-091712-ladsgroup.json [production]
09:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P42593 and previous config saved to /var/cache/conftool/dbconfig/20221208-090205-ladsgroup.json [production]
08:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1158 (T322618)', diff saved to https://phabricator.wikimedia.org/P42592 and previous config saved to /var/cache/conftool/dbconfig/20221208-085724-ladsgroup.json [production]
08:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
08:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
08:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
08:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
08:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1127 (T322618)', diff saved to https://phabricator.wikimedia.org/P42591 and previous config saved to /var/cache/conftool/dbconfig/20221208-085657-ladsgroup.json [production]
08:46 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150 (T322618)', diff saved to https://phabricator.wikimedia.org/P42590 and previous config saved to /var/cache/conftool/dbconfig/20221208-084659-ladsgroup.json [production]
08:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2150 (T322618)', diff saved to https://phabricator.wikimedia.org/P42589 and previous config saved to /var/cache/conftool/dbconfig/20221208-084442-ladsgroup.json [production]