2018-10-08
13:41 <elukey> restart navtiming.service on webperf1001 to pick up the dns change for etcd [production]
13:39 <marostegui> Enable gtid on the following slaves: db2068 db1122 db1117:3323 [production]
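    A minimal sketch of what enabling GTID on a MariaDB replica typically involves, assuming slave_pos GTID mode; the host/port are taken from the entry above, the FQDN and exact procedure are assumptions:
        # assumption: MariaDB replica; switch the replication connection to GTID (slave_pos) mode
        mysql -h db1117.eqiad.wmnet -P 3323 -e "STOP SLAVE; CHANGE MASTER TO MASTER_USE_GTID=slave_pos; START SLAVE;"
        mysql -h db1117.eqiad.wmnet -P 3323 -e "SHOW SLAVE STATUS\G" | grep -i gtid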
13:37 <elukey> restart confd on all the other eqiad nodes to pick up new srv records [production]
13:32 <elukey> restart confd on cp1* to pick up new srv records [production]
13:11 <_joe_> purging the dnsrec cache for eqiad,esams etcd client SRV records [production]
13:09 <ema> depool eqiad front-edge traffic T201039 [production]
13:05 <banyek> converting cebwiki.templatelinks to TokuDB on host dbstore1002.eqiad.wmnet (T205544) [production]
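    A sketch of what the conversion amounts to, assuming the database is named cebwiki and the statement is run directly against dbstore1002:
        # assumption: rewrites the table in place into the TokuDB storage engine (long-running on large tables)
        mysql -h dbstore1002.eqiad.wmnet cebwiki -e "ALTER TABLE templatelinks ENGINE=TokuDB;"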
13:04 <banyek> downtimed notifications for dbstore1002 replication threads (T205544) [production]
12:49 <banyek> pt-kill-wmf enabled on the wikireplicas (T203674) [production]
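    wmf-pt-kill is based on Percona's pt-kill; a sketch of an upstream-style invocation, with all thresholds and match criteria invented for illustration rather than taken from the wikireplicas config:
        # assumption: kill queries busy longer than 300s, checking every 30s (illustrative values only)
        pt-kill --busy-time 300 --match-command Query --victims all --interval 30 --print --kill --daemonize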
11:59 <_joe_> restart pybal in esams, after running puppet, to switch etcd cluster used [production]
11:46 <_joe_> restart pybal on lvs1001 [production]
11:46 <addshore> SWAT done [production]
11:45 <addshore@deploy1001> Synchronized wmf-config/throttle.php: Add throttle exception for Netherlands Hackathon October 2018 - Wiki Techstorm T206241, and remove other rules. (duration: 00m 56s) [production]
11:39 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary --fix --add-prefix=T202769 # T202769 [production]
11:35 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary --fix # Finished, still 111 pages to fix [production]
11:34 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary --fix # Started [production]
11:33 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary # (dryrun, 11529 links to fix, 11529 were resolvable.) [production]
11:32 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:455249]] Use translated MetaNamespace for fy.wiktionary T202769 (duration: 00m 58s) [production]
11:27 <addshore@deploy1001> Synchronized wmf-config/flaggedrevs.php: SWAT: [[gerrit:464890]] Remove the "reviewer" group at ruwikisource T205997 (duration: 00m 57s) [production]
10:41 <elukey> restart mcrouter on mw2201 with more verbose logging settings as test [production]
09:55 <moritzm> installing python3.5/python2.7 security updates [production]
09:51 <godog> rebuild sdc sdh sdj sdi on ms-be2041 with crc=1 finobt=0 - T199198 [production]
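    crc and finobt are mkfs.xfs metadata options; a sketch for one of the four disks, with the partition name assumed:
        # assumption: /dev/sdc1 stands in for each rebuilt partition on ms-be2041
        mkfs.xfs -f -m crc=1,finobt=0 /dev/sdc1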
08:20 <marostegui> Disable GTID on the es2 and es3 eqiad masters [production]
08:20 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2001.codfw.wmnet [production]
08:20 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2002.codfw.wmnet [production]
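    The two entries above are conftool's audit trail; the confctl invocation behind such an entry looks roughly like this (syntax assumed from conftool's CLI):
        # assumption: run on the cluster-management host with conftool credentials
        confctl select 'dc=codfw,cluster=wdqs,name=wdqs2001.codfw.wmnet' set/weight=15
        confctl select 'dc=codfw,cluster=wdqs,name=wdqs2002.codfw.wmnet' set/weight=15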
07:50 <marostegui> Enabling replication eqiad -> codfw in preparation for DC failover [production]
07:40 <marostegui> Disable GTID on s1,s2,s3,s4,s6,s7,s8 eqiad masters in preparation for enabling replication eqiad -> codfw [production]
07:39 <_joe_> disabling puppet, doing etcd tests on lvs1006 [production]
07:38 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2002.eqiad.wmnet [production]
07:38 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2001.eqiad.wmnet [production]
07:38 <gehel> reducing relative weight of wdqs2003 in pybal - T206423 [production]
07:27 <banyek> enabling wmf-pt-kill for the first time on labsdb1010 [production]
07:20 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1092 with low weight - T205514 (duration: 01m 27s) [production]
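    "Synchronized wmf-config/..." entries are emitted by scap; the command behind such a line is roughly the following sketch (staging path and workflow assumed):
        # assumption: run from /srv/mediawiki-staging on deploy1001 after pulling the merged config change
        scap sync-file wmf-config/db-eqiad.php 'Repool db1092 with low weight - T205514'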
07:00 <moritzm> installing git security updates [production]
2018-10-07
16:40 <dereckson> Reset user email for account "Dominic Mayers" (T206421) [production]
16:35 <elukey> running a script in tmux (under my username) on mw2201 to poll the status of a mcrouter key/route every 10s via its admin API (very lightweight, but kill it if needed) [production]
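    A sketch of such a poll loop; the listening port, the probed key and the use of netcat are assumptions, while __mcrouter__.route(op,key) is one of mcrouter's memcached-protocol admin requests:
        # assumption: mcrouter listening on localhost:11213; the probed key is illustrative
        while true; do
            printf 'get __mcrouter__.route(get,WANCache:sample)\r\nquit\r\n' | nc -q 1 localhost 11213
            sleep 10
        done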
14:52 <onimisionipe> repooling wdqs2003: caught up on lag; lag issues also seem to be creeping up on wdqs200[1|2] [production]
04:29 <SMalyshev> temp depooled wdqs2003 [production]
03:12 <ejegg> disabled all fundraising scheduled jobs due to what looks like disk issues on civi1001 [production]
2018-10-06
21:20 <gehel> repooling wdqs2003: caught up on updater lag [production]
20:43 <_joe_> restarting apache2 on puppetmaster1001 [production]
19:16 <onimisionipe> depooling wdqs2003 [production]
18:09 <elukey> restart Yarn Resource Manager on an-master1002 to force an-master1001 to take the active role back (failed over due to a zk conn issue) [production]
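    Verifying (or nudging) which ResourceManager is active can be sketched as follows; the systemd unit name and the rm1/rm2 ids are assumptions:
        # assumption: rm1/rm2 map to an-master1001/an-master1002 in yarn-site.xml
        sudo systemctl restart hadoop-yarn-resourcemanager.service    # on an-master1002
        yarn rmadmin -getServiceState rm1
        yarn rmadmin -getServiceState rm2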
17:07 <onimisionipe> restarting wdqs-blazegraph on wdqs2003 [production]
13:48 <bblack> multatuli: update gdnsd package to 2.99.9930-beta-1+wmf1 [production]
13:47 <bblack> authdns1001: update gdnsd package to 2.99.9930-beta-1+wmf1 (correction to last msg) [production]
13:46 <bblack> authdns1001: update gdnsd package to 2.99.9161-beta-1+wmf1 [production]
12:57 <bblack> rebooting cp1076 [production]
12:49 <bblack> depool cp1076, apparently has disk issues [production]
2018-10-05
23:50 <bblack> <<<<<<< repooling eqiad edge caches, a few days ahead of intended switchback next Weds, to alleviate some traffic engineering concerns over the weekend >>>>>> [production]