2020-04-08
11:29 <tgr@deploy1001> Synchronized wmf-config/: SWAT: [[gerrit:584133|Deploy GrowthExperiments on Serbian Wikipedia (T241181)]] (duration: 01m 06s) [production]
11:28 <tgr@deploy1001> Synchronized dblists/: SWAT: [[gerrit:584133|Deploy GrowthExperiments on Serbian Wikipedia (T241181)]] (duration: 01m 17s) [production]
11:05 <XioNoX> push urpf log only to codfw - T244147 [production]
10:39 <jbond42> restarting idp.wikimedia.org [production]
10:14 <marostegui> Deploy schema change on db1078 [production]
10:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1078 for schema change', diff saved to https://phabricator.wikimedia.org/P10940 and previous config saved to /var/cache/conftool/dbconfig/20200408-101431-marostegui.json [production]
09:30 <jynus> stopping and removing db1095:s8 instance [production]
09:20 <godog> upgrade grafana on cloudmetrics hosts - T244208 [production]
09:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1075 after schema change', diff saved to https://phabricator.wikimedia.org/P10939 and previous config saved to /var/cache/conftool/dbconfig/20200408-091728-marostegui.json [production]
09:11 <gehel> setting weight=10 for all pooled wdqs servers in codfw - T246343 [production]
09:10 <marostegui> Reload proxies on dbproxy1018 and dbproxy1019 to depool labsdb1011 - T249188 T248592 [production]
09:07 <gehel> pooling wdqs200[78] - new servers ready to go! - T246343 [production]
08:46 <marostegui> Rename wb_terms and recreate views on labsdb1009-labsdb1011 - T248592 T248086 [production]
08:39 <godog> upgrade grafana on grafana1002 - T244208 [production]
08:17 <_joe_> switching parsoid to envoy (take 2) in eqiad [production]
07:23 <marostegui> Deploy schema change on db1075 [production]
07:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1075 for schema change', diff saved to https://phabricator.wikimedia.org/P10937 and previous config saved to /var/cache/conftool/dbconfig/20200408-072331-marostegui.json [production]
06:31 <marostegui> Deploy schema change on db1095:3313 [production]
06:11 <marostegui> Stop haproxy on dbproxy1011 - T231520 [production]
05:44 <vgutierrez> rolling upgrade ATS to 8.0.6-1wm6 in cp[5006,5012,3065,3064,2042,2041,1090,1089] [production]
05:34 <marostegui> Deploy schema change on dbstore1004:3313 [production]
05:33 <_joe_> repooling wtp1025, with envoy and logging any error above 404 T249535 [production]
04:36 <vgutierrez> rolling restart of ats-tls - T249335 [production]
2020-04-07
20:39 <andrewbogott> correction: briefly downtiming ldap-eqiad-replica0 and ldap-eqiad-replica1. I'm trying to investigate a possible split-brain so going to turn ldap off on one, and then the other, to see if behavior changes [production]
20:37 <andrewbogott> briefly downtiming serpens and seaborgium. I'm trying to investigate a possible split-brain so going to turn ldap off on one, and then the other, to see if behavior changes [production]
20:34 <hoo> (Take 3) Temporarily modified dumpsgen's crontab on snapshot1008 so that the Wikidata RDF dumps start now (broke as a side effect of T249565) [production]
20:17 <jhuneidi@deploy1001> rebuilt and synchronized wikiversions files: group0 wikis to 1.35.0-wmf.27 refs T247774 [production]
20:09 <jhuneidi@deploy1001> Finished scap: testwikis wikis to 1.35.0-wmf.27 (duration: 60m 34s) [production]
20:08 <hoo> (Take 2) Temporarily modified dumpsgen's crontab on snapshot1008 so that the Wikidata RDF dumps start now (broke as a side effect of T249565) [production]
19:45 <hoo> Temporarily modified dumpsgen's crontab on snapshot1008 so that the Wikidata RDF dumps start now (broke as a side effect of T249565) [production]
19:13 <XioNoX> push pfw firewall rules - T249650 [production]
19:08 <jhuneidi@deploy1001> Started scap: testwikis wikis to 1.35.0-wmf.27 [production]
18:48 <jhuneidi@deploy1001> Pruned MediaWiki: 1.35.0-wmf.24 (duration: 12m 44s) [production]
17:56 <herron> increasing codfw.mediawiki.job.cirrusSearchElasticaWrite to 3 partitions T240702 [production]
17:55 <addshore@deploy1001> Synchronized wmf-config/CommonSettings.php: T249565 T249595 RejectParserCacheValue entries during wb_items_per_site drop incident (14.5/14.5h) retry (duration: 01m 02s) [production]
17:54 <addshore> last sync stuck on sync-masters [production]
17:54 <addshore@deploy1001> sync-file aborted: T249565 T249595 RejectParserCacheValue entries during wb_items_per_site drop incident (14.5/14.5h) (duration: 01m 16s) [production]
17:49 <ppchelko@deploy1001> Started restart [cpjobqueue/deploy@83c93d1]: Try to make it notice new partitions T240702 [production]
17:40 <herron> increasing eqiad.mediawiki.job.cirrusSearchElasticaWrite to 3 partitions T240702 [production]
16:24 <longma> 1.35.0-wmf.27 was branched at e76ac29cd9c57bed4097ec8a4ea8311fb55fd967 for T247774 [production]
16:16 <hashar> restarting CI jenkins [production]
15:53 <hnowlan@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'changeprop' for release 'staging'. [production]
15:21 <moritzm> installing idp-test2001 [production]
15:20 <XioNoX> enable uRPF loose mode (log only) on cr4-ulsfo - T244147 [production]
15:17 <addshore@deploy1001> Synchronized wmf-config/CommonSettings.php: T249565 T249595 RejectParserCacheValue entries during wb_items_per_site drop incident (12/14.5h) (duration: 01m 00s) [production]
15:10 <ema> cp3052: stop purged, start vhtcpd T249583 T241232 [production]
15:00 <hnowlan@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'changeprop' for release 'staging'. [production]
14:56 <addshore@deploy1001> Synchronized wmf-config/CommonSettings.php: T249565 T249595 RejectParserCacheValue entries during wb_items_per_site drop incident (10/14.5h) (duration: 00m 55s) [production]
14:52 <jeh> cloudvirt2003-dev: downtime in icinga and reboot to enable BIOS virtualization support T249453 [production]
14:38 <ema> cp3052: stop vhtcpd, start purged T249583 [production]