2020-04-17
15:52 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-main' for release 'production' . [production]
15:48 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-analytics' for release 'canary' . [production]
15:48 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-analytics' for release 'production' . [production]
15:42 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-analytics-external' for release 'canary' . [production]
15:41 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-analytics-external' for release 'production' . [production]
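These helmfile entries are service deploys run from the deployment server. A minimal sketch of the underlying invocation, assuming the upstream helmfile CLI; the chart directory and environment name are illustrative:

    # Deploy a service release with helmfile; path and environment are assumptions.
    cd /srv/deployment-charts/helmfile.d/services/eventgate-analytics
    helmfile -e staging diff     # preview the change against the live release
    helmfile -e staging apply    # roll out, as logged for 'canary' and 'production'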
15:20 <rzl> remove cronjobs from mwmaint1002 previously updated to systemd timers and erroneously left in crontab -- diffs: https://phabricator.wikimedia.org/P11012 T211250 [production]
14:29 <mutante> ganeti2001 - killed and restarted gnt-rapi process with the correct new key and cert [production]
14:19 <cdanis> add peer AS29802 to cr2-eqdfw and cr2-esams [production]
14:01 <mutante> netbox1001 - netbox_ganeti_eqiad_sync / systemd state fixed after gnt-rapi is running again on ganeti1003 [production]
14:00 <mutante> ganeti1003 - fixing gnt-rapi daemon not running [production]
13:54 <mateusbs17> Running VACUUM FULL for gis DB in maps2004.codfw.wmnet (which is depooled at the moment) [production]
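A sketch of the VACUUM FULL run above, assuming gis is a PostgreSQL/PostGIS database; VACUUM FULL rewrites tables under an exclusive lock, hence the depool first:

    # Reclaim space in the gis database; requires the host to be depooled.
    sudo -u postgres psql -d gis -c 'VACUUM FULL VERBOSE;'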
13:00 <mutante> netbox1001 - sudo systemctl start netbox_ganeti_eqiad_sync (was failed) [production]
12:54 <mutante> contint2001 /usr/local/sbin/build-envoy-config -c /etc/envoy ; restart envoyproxy; was not listening on admin port [production]
12:45 <mutante> contint2001 - restart nagios-nrpe-server [production]
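The envoy fix above (repeated on contint1001 in the 10:16/10:17 entries below) follows this shape; the readiness check and admin port number are assumptions:

    sudo /usr/local/sbin/build-envoy-config -c /etc/envoy   # rebuild the envoy config
    sudo systemctl restart envoyproxy.service
    curl -sf http://localhost:9901/ready    # confirm the admin port answers (9901 assumed)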
12:28 <moritzm> copied kubernetes-client from stretch-wikimedia to buster-wikimedia T224591 [production]
11:35 <mutante> contint2001 - apt-get update, run puppet to install helm-diff [production]
11:33 <jayme> imported helm-diff 2.11.0+3-2+deb10u1 to main for buster-wikimedia [production]
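Package imports and copies like these are typically done with reprepro on the apt host; a sketch, with the .deb filename and repo layout assumed:

    # Import a built package into a distribution's 'main' component (filename assumed).
    sudo reprepro -C main includedeb buster-wikimedia helm-diff_2.11.0+3-2+deb10u1_amd64.deb
    # Copy an existing package between distributions (destination comes first).
    sudo reprepro copy buster-wikimedia stretch-wikimedia kubernetes-client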
11:23 <dzahn@cumin2001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) [production]
11:23 <dzahn@cumin2001> START - Cookbook sre.hosts.decommission [production]
11:22 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
11:21 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
11:20 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
11:20 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
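Cookbook runs such as these are started from the cumin hosts via the spicerack 'cookbook' entry point; the hostname, task id, and flags below are placeholders, and the END (FAIL) lines above correspond to non-zero exit codes (1, 99):

    sudo cookbook sre.hosts.decommission db1000.eqiad.wmnet -t T250000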
10:17 <_joe_> contint1001:~$ sudo systemctl restart envoyproxy.service [production]
10:16 <_joe_> contint1001:~$ sudo /usr/local/sbin/build-envoy-config -c /etc/envoy [production]
10:07 <kormat> change pc2010 to replicate from pc1010 T247787 [production]
09:54 <kormat> enabling replication from pc1007 to pc1010 T247787 [production]
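Repointing a parsercache replica as above reduces to standard MariaDB replication commands; the binlog coordinates here are placeholders taken from the new master:

    sudo mysql -e "STOP SLAVE;
      CHANGE MASTER TO MASTER_HOST='pc1010.eqiad.wmnet',
        MASTER_LOG_FILE='pc1010-bin.000123', MASTER_LOG_POS=4;
      START SLAVE;"
    # Verify both replication threads are running.
    sudo mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running'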
09:20 <jayme> imported helm 2.12.2 to main for buster-wikimedia [production]
09:07 <vgutierrez> disable KA between ats-tls and varnish-fe on cp1077 - T248938 [production]
09:00 <kormat> dropping wikidatawiki.wb_items_per_site_old table in eqiad (non-labs hosts) T250345 [production]
08:15 <kormat> dropping wikidatawiki.wb_items_per_site_old table in codfw T250345 [production]
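The drop itself is a one-liner per host; database and table names come from the entries, the IF EXISTS guard is an addition:

    sudo mysql wikidatawiki -e 'DROP TABLE IF EXISTS wb_items_per_site_old;'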
07:54 <ema> cache_text: puppet run to stop vhtcpd and start purged T249325 [production]
07:45 <gehel> restart wdqs-updater on all nodes after deployment [production]
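Fleet-wide steps like these two are typically driven with cumin; the host aliases and wrapper command are illustrative:

    # Run puppet on the cache_text cluster, then restart the updater on wdqs nodes.
    sudo cumin 'A:cp-text' 'run-puppet-agent'
    sudo cumin 'A:wdqs-all' 'systemctl restart wdqs-updater'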
06:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1092 after compression', diff saved to https://phabricator.wikimedia.org/P11005 and previous config saved to /var/cache/conftool/dbconfig/20200417-063138-marostegui.json [production]
06:30 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1111 from API', diff saved to https://phabricator.wikimedia.org/P11004 and previous config saved to /var/cache/conftool/dbconfig/20200417-063038-marostegui.json [production]
06:26 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1092 after compression', diff saved to https://phabricator.wikimedia.org/P11003 and previous config saved to /var/cache/conftool/dbconfig/20200417-062642-marostegui.json [production]
06:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1092 after compression', diff saved to https://phabricator.wikimedia.org/P11002 and previous config saved to /var/cache/conftool/dbconfig/20200417-061907-marostegui.json [production]
06:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1092 after compression', diff saved to https://phabricator.wikimedia.org/P11001 and previous config saved to /var/cache/conftool/dbconfig/20200417-060419-marostegui.json [production]
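The five dbctl commits above trace a gradual repool; a sketch of the cycle, with the percentage flag assumed from dbctl's CLI:

    dbctl instance db1092 pool -p 25    # repool at reduced weight (flag assumed)
    dbctl config commit -m 'Slowly repool db1092 after compression'
    # ...raise the percentage stepwise, then:
    dbctl instance db1092 pool -p 100
    dbctl config commit -m 'Fully repool db1092 after compression'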
2020-04-16
22:34 <maryum> reindexing wikis that failed from previous reindex on mwmaint1002 [production]
22:10 <jforrester@deploy1001> Pruned MediaWiki: 1.35.0-wmf.26 (duration: 05m 26s) [production]
21:59 <jforrester@deploy1001> Synchronized php-1.35.0-wmf.28/extensions/FlaggedRevs/: T250439 Don't try to create a Revision with null (duration: 01m 02s) [production]
21:54 <bsitzmann@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'wikifeeds' for release 'production' . [production]
21:51 <bsitzmann@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'wikifeeds' for release 'production' . [production]
21:48 <mholloway-shell@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'wikifeeds' for release 'staging' . [production]
20:42 <mstyles@deploy1001> Finished deploy [wdqs/wdqs@1fb52b3]: WDQS version 0.3.22 (duration: 11m 43s) [production]
20:30 <mstyles@deploy1001> Started deploy [wdqs/wdqs@1fb52b3]: WDQS version 0.3.22 [production]
20:01 <maryum> "beginning deploy of WDQS 0.3.22" [production]
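The Started/Finished pair above is a scap3 deploy run from the repo's deploy directory on deploy1001; the path is an assumption:

    cd /srv/deployment/wdqs/wdqs    # deploy directory assumed
    scap deploy 'WDQS version 0.3.22'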
19:06 <jforrester@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.35.0-wmf.28 [production]
18:57 <krinkle@deploy1001> Synchronized errorpages/404.php: I9fd5c99130c64 (duration: 01m 07s) [production]
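The 'Synchronized ...' and 'rebuilt and synchronized wikiversions files' entries map to scap subcommands on the deploy host; a sketch, reusing the log messages above:

    scap sync-wikiversions 'all wikis to 1.35.0-wmf.28'
    scap sync-file errorpages/404.php 'I9fd5c99130c64'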
17:52 <XioNoX> rename/format asw-ulsfo interfaces to match future homer driven format [production]