2021-08-09
05:56 <XioNoX> enable cloudsw1-c8 interfaces toward cloudsw2-c8 - T277340 [production]
05:23 <marostegui> Lag in s4 (commonswiki) will appear on clouddb* hosts (wiki replicas) T288273 [production]
05:22 <marostegui> Optimize commonswiki.image on eqiad, lag will appear - T288273 [production]
2021-08-08
22:08 <wm-bot> <lucaswerkmeister> END - optimize querytime table [tools.quickcategories]
22:08 <wm-bot> <lucaswerkmeister> START - optimize querytime table [tools.quickcategories]
22:02 <wm-bot> <lucaswerkmeister> END - purge querytime rows older than 30 days, in batches of 1000 sleeping for 1s between batches (deleted 10257486 rows) [tools.quickcategories]
13:16 <wm-bot> <lucaswerkmeister> START - purge querytime rows older than 30 days, in batches of 1000 sleeping for 1s between batches [tools.quickcategories]
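(The purge entry above deletes old rows in fixed-size batches with a sleep between batches, so the replicas never fall far behind. A minimal sketch of that pattern, using sqlite3 and an illustrative `querytime(created)` schema — the actual tool's table layout and database are not shown in the log:)

```python
import sqlite3
import time


def purge_old_rows(conn, batch_size=1000, sleep_s=1.0, max_age_days=30):
    """Delete rows older than max_age_days in small batches,
    sleeping between batches to keep replication lag bounded.
    Table and column names here are hypothetical."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM querytime WHERE rowid IN ("
            " SELECT rowid FROM querytime"
            " WHERE created < datetime('now', ?) LIMIT ?)",
            (f"-{max_age_days} days", batch_size),
        )
        conn.commit()
        if cur.rowcount == 0:
            return total  # nothing left to purge
        total += cur.rowcount
        time.sleep(sleep_s)  # let replicas catch up between batches
```

The sleep is the important part: one giant `DELETE` would hold locks and generate replication lag in a single burst, whereas small batches spread the write load out over hours (here, roughly nine hours for ~10M rows).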
07:39 <majavah> stop running and queued GlobalRollbacker-unreviewer grid jobs per private tool maintainer request [tools.dannys712-bot]
2021-08-07
20:13 <wm-bot> <lucaswerkmeister> deployed 7028c292e7 (lock less tables) [tools.quickcategories]
19:06 <wm-bot> <lucaswerkmeister> deployed bbdebedd08 (styling improvement) [tools.quickcategories]
05:59 <majavah> restart nginx on toolserver-proxy-01 if that helps with flapping icinga certificate expiry check [tools]
05:30 <Operator873|CVN> restarted CVNBot5 - was not reporting [cvn]
00:48 <James_F> Docker: Publish quibble-buster-php73-coverage 1.1.1 for T287918. [releng]
00:29 <James_F> Zuul: Add skin-coverage jobs to all Wikimedia production skins T287918 [releng]
00:27 <James_F> Zuul: Provide a skin-coverage template T287918 [releng]
2021-08-06
23:53 <James_F> Docker: Publishing quibble-buster-php73-coverage 1.1.0 for T287918 [releng]
23:47 <James_F> Zuul: [mediawiki/skins/Mirage] Not a production skin; move to right section [releng]
19:17 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
19:12 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
19:04 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:53 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:53 <cmjohnson@cumin1001> END (ERROR) - Cookbook sre.dns.netbox (exit_code=97) [production]
18:52 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:45 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:41 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
18:40 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:36 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
17:39 <brennen> gitlab: run ansible to apply [[gerrit:710529|remove backup warning for config backups]] (T288324) [production]
16:59 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes; selector: name=maps2005.codfw.wmnet [production]
16:56 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
16:50 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
16:38 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts peek2001.codfw.wmnet [production]
16:34 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
16:34 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on maps1005.eqiad.wmnet with reason: Awaiting reimaging, depooled. [production]
16:34 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on maps1005.eqiad.wmnet with reason: Awaiting reimaging, depooled. [production]
16:30 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
16:30 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts peek2001.codfw.wmnet [production]
16:29 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8 days, 4:00:00 on peek2001.codfw.wmnet with reason: decom [production]
16:29 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 8 days, 4:00:00 on peek2001.codfw.wmnet with reason: decom [production]
16:16 <bstorm> failed over to tools-docker-registry-06 (which has more space) T288229 [tools]
16:09 <wikibugs> Updated channels.yaml to: b3053ae5642948f4f9828f2db03df5ae884e2617 Add some extra Wikisource projects to #wikisource [tools.wikibugs]
16:03 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
16:02 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
15:14 <hnowlan> removing maps1005 from old maps cassandra cluster before reimaging [production]
14:35 <hnowlan@puppetmaster1001> conftool action : set/pooled=no; selector: name=maps1005.eqiad.wmnet [production]
14:29 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on maps2005.codfw.wmnet with reason: Reimaging [production]
14:29 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on maps2005.codfw.wmnet with reason: Reimaging [production]
14:26 <hnowlan@cumin2002> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) for 2:00:00 on maps2005.codfw.wmnet with reason: REIMAGE [production]
14:24 <hnowlan@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on maps2005.codfw.wmnet with reason: REIMAGE [production]
13:35 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes; selector: name=maps1006.eqiad.wmnet [production]