2020-11-18
09:13 <jbond42> renew puppet certificate of seaborgium [production]
08:34 <marostegui> Stop MySQL on es1011, es1012, es1014 T268100 T268101 T268102 [production]
08:29 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es1012 from dbctl T268101', diff saved to https://phabricator.wikimedia.org/P13326 and previous config saved to /var/cache/conftool/dbconfig/20201118-082942-marostegui.json [production]
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1012 before decommissioning it', diff saved to https://phabricator.wikimedia.org/P13325 and previous config saved to /var/cache/conftool/dbconfig/20201118-082636-marostegui.json [production]
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 100%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13324 and previous config saved to /var/cache/conftool/dbconfig/20201118-082618-root.json [production]
08:11 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 80%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13323 and previous config saved to /var/cache/conftool/dbconfig/20201118-081115-root.json [production]
07:56 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 75%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13322 and previous config saved to /var/cache/conftool/dbconfig/20201118-075612-root.json [production]
07:45 <marostegui> Deploy schema change on db1098:3316 T267335 T267399 [production]
07:41 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 60%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13321 and previous config saved to /var/cache/conftool/dbconfig/20201118-074108-root.json [production]
07:28 <Urbanecm> Start of mwscript extensions/AbuseFilter/maintenance/updateVarDumps.php --wiki=$wiki --print-orphaned-records-to=/tmp/urbanecm/$wiki-orphaned.log --progress-markers > $wiki.log in a tmux at mwmaint1002 (wiki=nlwiki; T246539) [production]
07:26 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 50%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13320 and previous config saved to /var/cache/conftool/dbconfig/20201118-072605-root.json [production]
07:16 <marostegui> Run check table on s6 on db1125:3316 T267090 [production]
07:11 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 30%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13319 and previous config saved to /var/cache/conftool/dbconfig/20201118-071101-root.json [production]
06:55 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 25%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13318 and previous config saved to /var/cache/conftool/dbconfig/20201118-065558-root.json [production]
06:53 <elukey> restart also mirror maker on kafka-main1001/1003 (seems not related but just to clear old errors and a possible weird state) [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'es1018 (re)pooling @ 100%: Slowly pool es1018 after cloning es1032 T261717', diff saved to https://phabricator.wikimedia.org/P13317 and previous config saved to /var/cache/conftool/dbconfig/20201118-064556-root.json [production]
06:40 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 20%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13316 and previous config saved to /var/cache/conftool/dbconfig/20201118-064054-root.json [production]
06:37 <elukey> restart kafka-mirror-main-codfw_to_main-eqiad@0.service on kafka-main1002 - consumer msg rate low since kafka-main2003 went down for codfw c7 failure [production]
06:30 <marostegui@cumin1001> dbctl commit (dc=all): 'es1018 (re)pooling @ 75%: Slowly pool es1018 after cloning es1032 T261717', diff saved to https://phabricator.wikimedia.org/P13315 and previous config saved to /var/cache/conftool/dbconfig/20201118-063052-root.json [production]
06:25 <marostegui@cumin1001> dbctl commit (dc=all): 'es1032 (re)pooling @ 10%: Slowly pool es1032 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13314 and previous config saved to /var/cache/conftool/dbconfig/20201118-062551-root.json [production]
06:25 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es1014 from dbctl', diff saved to https://phabricator.wikimedia.org/P13313 and previous config saved to /var/cache/conftool/dbconfig/20201118-062547-marostegui.json [production]
06:15 <marostegui@cumin1001> dbctl commit (dc=all): 'es1018 (re)pooling @ 50%: Slowly pool es1018 after cloning es1032 T261717', diff saved to https://phabricator.wikimedia.org/P13312 and previous config saved to /var/cache/conftool/dbconfig/20201118-061549-root.json [production]
06:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1014 before decommissioning it', diff saved to https://phabricator.wikimedia.org/P13311 and previous config saved to /var/cache/conftool/dbconfig/20201118-061340-marostegui.json [production]
06:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Set es1027 as new es1 master', diff saved to https://phabricator.wikimedia.org/P13310 and previous config saved to /var/cache/conftool/dbconfig/20201118-061218-marostegui.json [production]
06:11 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es1011 from dbctl', diff saved to https://phabricator.wikimedia.org/P13309 and previous config saved to /var/cache/conftool/dbconfig/20201118-061112-marostegui.json [production]
06:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es1032 with minimum weight on es1 T261717', diff saved to https://phabricator.wikimedia.org/P13308 and previous config saved to /var/cache/conftool/dbconfig/20201118-060641-marostegui.json [production]
06:00 <marostegui@cumin1001> dbctl commit (dc=all): 'es1018 (re)pooling @ 25%: Slowly pool es1018 after cloning es1032 T261717', diff saved to https://phabricator.wikimedia.org/P13307 and previous config saved to /var/cache/conftool/dbconfig/20201118-060045-root.json [production]
05:47 <marostegui> Run check table on enwiki on db1124:3311 T267090 [production]
05:45 <marostegui@cumin1001> dbctl commit (dc=all): 'es1018 (re)pooling @ 10%: Slowly pool es1018 after cloning es1032 T261717', diff saved to https://phabricator.wikimedia.org/P13306 and previous config saved to /var/cache/conftool/dbconfig/20201118-054542-root.json [production]
00:53 <tgr_> also deployed [[gerrit:641294|Suggested Edits: Guard against task type not existing (T268012)]] [production]
00:52 <tgr@deploy1001> Synchronized php-1.36.0-wmf.18/extensions/GrowthExperiments/includes/HomepageModules/SuggestedEdits.php: Backport: [[gerrit:641295|Suggested edits: Guard against empty topic data (T268015)]] (duration: 01m 07s) [production]
00:27 <tgr@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:641250|Enable watchlist expiry feature on Wikidata & Commons (T266874)]] (duration: 01m 03s) [production]
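The es1032 and es1018 entries above follow a staged (re)pooling pattern: commit the host at a small percentage via dbctl, wait roughly fifteen minutes while replication and traffic look healthy, then step the weight up until 100%. A minimal sketch of that ramp loop — `set_weight` and `healthy` are hypothetical callbacks for illustration; the real workflow goes through `dbctl` commits, not direct function calls:

```python
import time

# Ramp steps mirroring the log above (small initial weight up to 100%).
RAMP = [10, 25, 50, 75, 100]

def gradual_repool(host, set_weight, healthy, step_wait=0.0):
    """Pool `host` in increasing weight steps, aborting on a failed health check.

    `set_weight(host, pct)` and `healthy(host)` are hypothetical callbacks;
    in production each step would be a dbctl commit, ~15 minutes apart.
    """
    applied = []
    for pct in RAMP:
        set_weight(host, pct)
        applied.append(pct)
        if not healthy(host):
            # Roll back to fully depooled on any health failure.
            set_weight(host, 0)
            return applied, False
        time.sleep(step_wait)
    return applied, True

if __name__ == "__main__":
    weights = {}
    steps, ok = gradual_repool(
        "es1032",
        set_weight=lambda h, p: weights.__setitem__(h, p),
        healthy=lambda h: True,
    )
    print(steps, ok, weights)  # [10, 25, 50, 75, 100] True {'es1032': 100}
```

The interleaved es1018/es1032 ramps in the log show two such loops running concurrently, each stepping independently.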
2020-11-17
22:54 <mforns@deploy1001> Finished deploy [analytics/refinery@f19d20c] (thin): Regular analytics weekly train THIN [analytics/refinery@f19d20c21ada05df230d00c6e0022a7d5c356c13] (duration: 00m 07s) [production]
22:54 <mforns@deploy1001> Started deploy [analytics/refinery@f19d20c] (thin): Regular analytics weekly train THIN [analytics/refinery@f19d20c21ada05df230d00c6e0022a7d5c356c13] [production]
22:53 <mforns@deploy1001> Finished deploy [analytics/refinery@f19d20c]: Regular analytics weekly train [analytics/refinery@f19d20c21ada05df230d00c6e0022a7d5c356c13] (duration: 12m 51s) [production]
22:45 <clarakosi@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'mathoid' for release 'production' . [production]
22:40 <mforns@deploy1001> Started deploy [analytics/refinery@f19d20c]: Regular analytics weekly train [analytics/refinery@f19d20c21ada05df230d00c6e0022a7d5c356c13] [production]
22:39 <clarakosi@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'mathoid' for release 'production' . [production]
22:29 <clarakosi@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'mathoid' for release 'staging' . [production]
22:10 <mutante> otrs1001 - systemctl start otrs-cache-cleanup [production]
22:08 <ppchelko@deploy1001> Finished deploy [restbase/deploy@8363aeb]: update to service-runner 2.8.0, everywhere (duration: 11m 07s) [production]
22:07 <mutante> otrs1001 - removing otrs-cache-cleanup cron from otrs's crontab - adding same command as systemd timer. gerrit:637038 T265138 [production]
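The otrs1001 change above replaces a crontab entry with an equivalent systemd timer (gerrit:637038, T265138). A minimal sketch of what such a timer unit can look like — the unit name, description, and schedule here are illustrative, not taken from the actual change:

```ini
# otrs-cache-cleanup.timer — illustrative sketch, not the deployed unit
[Unit]
Description=Periodically run the OTRS cache cleanup

[Timer]
OnCalendar=hourly
# Unlike cron, a persistent timer catches up on runs missed during downtime.
Persistent=true

[Install]
WantedBy=timers.target
```

A matching `otrs-cache-cleanup.service` would carry the actual cleanup command; `systemctl start otrs-cache-cleanup` (as in the 22:10 entry) then triggers that service unit directly.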
21:57 <ppchelko@deploy1001> Started deploy [restbase/deploy@8363aeb]: update to service-runner 2.8.0, everywhere [production]
21:32 <ppchelko@deploy1001> Finished deploy [restbase/deploy@8363aeb]: update to service-runner 2.8.0, codfw (duration: 07m 11s) [production]
21:24 <ppchelko@deploy1001> Started deploy [restbase/deploy@8363aeb]: update to service-runner 2.8.0, codfw [production]
20:56 <dancy@deploy1001> rebuilt and synchronized wikiversions files: group0 wikis to 1.36.0-wmf.18 [production]
20:42 <Urbanecm> End of mwscript extensions/AbuseFilter/maintenance/updateVarDumps.php --wiki=$wiki --print-orphaned-records-to=/tmp/urbanecm/$wiki-orphaned.log --progress-markers > $wiki.log in a tmux at mwmaint1002 (wiki=itwiki; T246539) [production]
20:31 <dancy@deploy1001> Finished scap: testwikis wikis to 1.36.0-wmf.18 (duration: 39m 37s) [production]
19:58 <pt1979@cumin2001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
19:56 <pt1979@cumin2001> START - Cookbook sre.hosts.downtime [production]
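The dancy entries above (scap to testwikis, then group0 to 1.36.0-wmf.18) reflect the weekly MediaWiki deployment train, which promotes a new branch through wiki groups in stages over several days. A tiny illustrative helper for that ordering — `next_stage` is hypothetical; the real process uses scap and manual promotion:

```python
# Promotion order of the weekly MediaWiki train, as seen in the log:
# testwikis first, then group0, with group1 and group2 on later days.
TRAIN_GROUPS = ["testwikis", "group0", "group1", "group2"]

def next_stage(current):
    """Return the train stage promoted after `current`, or None at the end.

    Hypothetical helper for illustration only.
    """
    i = TRAIN_GROUPS.index(current)
    return TRAIN_GROUPS[i + 1] if i + 1 < len(TRAIN_GROUPS) else None
```

Per this ordering, the 20:31 testwikis sync is followed by the 20:56 group0 promotion in the same evening.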