2021-07-07
07:03 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
2021-07-06
18:34 <otto@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'eventgate-analytics' for release 'production'. [production]
18:34 <otto@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'eventgate-analytics' for release 'canary'. [production]
18:03 <otto@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'eventgate-analytics' for release 'canary'. [production]
18:03 <otto@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'eventgate-analytics' for release 'production'. [production]
17:25 <joal@deploy1002> Finished deploy [analytics/refinery@419d1f0] (hadoop-test): Analytics deploy for Gobblin replacing Camus - HADOOP-TEST [analytics/refinery@419d1f0] (duration: 05m 31s) [production]
17:20 <joal@deploy1002> Started deploy [analytics/refinery@419d1f0] (hadoop-test): Analytics deploy for Gobblin replacing Camus - HADOOP-TEST [analytics/refinery@419d1f0] [production]
17:19 <joal@deploy1002> Finished deploy [analytics/refinery@419d1f0] (thin): Analytics deploy for Gobblin replacing Camus - THIN [analytics/refinery@419d1f0] (duration: 00m 07s) [production]
17:19 <joal@deploy1002> Started deploy [analytics/refinery@419d1f0] (thin): Analytics deploy for Gobblin replacing Camus - THIN [analytics/refinery@419d1f0] [production]
17:19 <joal@deploy1002> Finished deploy [analytics/refinery@419d1f0]: Analytics deploy for Gobblin replacing Camus [analytics/refinery@419d1f0] (duration: 36m 59s) [production]
16:42 <joal@deploy1002> Started deploy [analytics/refinery@419d1f0]: Analytics deploy for Gobblin replacing Camus [analytics/refinery@419d1f0] [production]
15:54 <otto@deploy1002> Finished deploy [analytics/refinery@a8e79f3] (hadoop-test): analytics test cluster deploy for webrequest_test gobblin job migration (duration: 05m 24s) [production]
15:48 <otto@deploy1002> Started deploy [analytics/refinery@a8e79f3] (hadoop-test): analytics test cluster deploy for webrequest_test gobblin job migration [production]
14:00 <marostegui@cumin1001> dbctl commit (dc=all): 'db2072 (re)pooling @ 100%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16777 and previous config saved to /var/cache/conftool/dbconfig/20210706-140049-root.json [production]
13:53 <otto@cumin1001> END (PASS) - Cookbook sre.aqs.roll-restart (exit_code=0) [production]
13:49 <otto@cumin1001> START - Cookbook sre.aqs.roll-restart [production]
13:49 <otto@cumin1001> END (FAIL) - Cookbook sre.aqs.roll-restart (exit_code=99) [production]
13:49 <otto@cumin1001> START - Cookbook sre.aqs.roll-restart [production]
13:45 <marostegui@cumin1001> dbctl commit (dc=all): 'db2072 (re)pooling @ 75%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16776 and previous config saved to /var/cache/conftool/dbconfig/20210706-134545-root.json [production]
13:30 <marostegui@cumin1001> dbctl commit (dc=all): 'db2072 (re)pooling @ 50%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16775 and previous config saved to /var/cache/conftool/dbconfig/20210706-133041-root.json [production]
13:15 <marostegui@cumin1001> dbctl commit (dc=all): 'db2072 (re)pooling @ 25%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16774 and previous config saved to /var/cache/conftool/dbconfig/20210706-131537-root.json [production]
12:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db2071 (re)pooling @ 100%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16773 and previous config saved to /var/cache/conftool/dbconfig/20210706-120242-root.json [production]
11:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2072', diff saved to https://phabricator.wikimedia.org/P16772 and previous config saved to /var/cache/conftool/dbconfig/20210706-115820-marostegui.json [production]
11:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1118', diff saved to https://phabricator.wikimedia.org/P16771 and previous config saved to /var/cache/conftool/dbconfig/20210706-115732-marostegui.json [production]
11:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db2071 (re)pooling @ 75%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16770 and previous config saved to /var/cache/conftool/dbconfig/20210706-114739-root.json [production]
11:32 <marostegui@cumin1001> dbctl commit (dc=all): 'db2071 (re)pooling @ 50%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16769 and previous config saved to /var/cache/conftool/dbconfig/20210706-113235-root.json [production]
11:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db2071 (re)pooling @ 25%: Repool after index change', diff saved to https://phabricator.wikimedia.org/P16768 and previous config saved to /var/cache/conftool/dbconfig/20210706-111731-root.json [production]
11:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2071', diff saved to https://phabricator.wikimedia.org/P16767 and previous config saved to /var/cache/conftool/dbconfig/20210706-111635-marostegui.json [production]
10:19 <moritzm> installing jackson-databind security updates on buster [production]
09:01 <_joe_> repooling wdqs1007 now that lag has caught up [production]
08:43 <moritzm> installing libuv1 security updates on buster [production]
07:06 <marostegui> Upgrade db1104 kernel [production]
06:54 <moritzm> installing PHP 7.3 security updates on buster [production]
06:50 <marostegui> Upgrade db1122 kernel [production]
06:35 <marostegui> Upgrade db1138 kernel [production]
06:31 <marostegui> Upgrade db1160 kernel [production]
00:56 <eileen> process-control config revision is 8d46b52ed4 [production]
2021-07-05
17:40 <legoktm> published fixed docker-registry.discovery.wmnet/nodejs10-devel:0.0.4 image (T286212) [production]
15:24 <_joe_> leaving wdqs1007 depooled so that the updater can recover faster, now at 16.5 hours of lag [production]
14:01 <moritzm> uploaded nginx 1.13.9-1+wmf3 for stretch-wikimedia [production]
12:50 <marostegui> Stop MySQL on db1117:3321 to clone db1125 T286042 [production]
11:29 <moritzm> installing openexr security updates on stretch [production]
11:07 <moritzm> installing tiff security updates on stretch [production]
10:48 <moritzm> upgrading PHP on miscweb* [production]
10:37 <jbond> enable puppet fleet wide after puppetdb change [production]
10:29 <marostegui> Optimize ruwiki.logging on s6 eqiad with replication T286102 [production]
10:27 <jbond> disable puppet fleet wide to perform puppetdb change [production]
08:15 <moritzm> rolling out debmonitor-client 0.3.0 [production]
08:03 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on releases1002.eqiad.wmnet with reason: bump CPU count [production]
08:03 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 0:30:00 on releases1002.eqiad.wmnet with reason: bump CPU count [production]