2018-10-08
11:39 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary --fix --add-prefix=T202769 # T202769 [production]
11:35 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary --fix # Finished, still 111 pages to fix [production]
11:34 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary --fix # Started [production]
11:33 <addshore> addshore@mwmaint2001:~$ mwscript namespaceDupes.php --wiki fywiktionary # (dryrun, 11529 links to fix, 11529 were resolvable.) [production]
11:32 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:455249]] Use translated MetaNamespace for fy.wiktionary T202769 (duration: 00m 58s) [production]
11:27 <addshore@deploy1001> Synchronized wmf-config/flaggedrevs.php: SWAT: [[gerrit:464890]] Remove the "reviewer" group at ruwikisource T205997 (duration: 00m 57s) [production]
10:41 <elukey> restart mcrouter on mw2201 with more verbose logging settings as a test [production]
10:26 <elukey> swapped db settings from analytics1003 to an-coord1001 on both Druid clusters (restarted coordinators and overlords) [analytics]
09:55 <moritzm> installing python3.5/python2.7 security updates [production]
09:51 <godog> rebuild sdc sdh sdj sdi on ms-be2041 with crc=1 finobt=0 - T199198 [production]
08:20 <marostegui> Disable gtid on es2 and es3 eqiad masters [production]
08:20 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2001.codfw.wmnet [production]
08:20 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2002.codfw.wmnet [production]
07:50 <marostegui> Enabling replication eqiad -> codfw in preparation for DC failover [production]
07:40 <marostegui> Disable GTID on s1,s2,s3,s4,s6,s7,s8 eqiad masters in preparation for enabling replication eqiad -> codfw [production]
07:39 <_joe_> disabling puppet, doing etcd tests on lvs1006 [production]
07:38 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2002.eqiad.wmnet [production]
07:38 <gehel@puppetmaster1001> conftool action : set/weight=15; selector: dc=codfw,cluster=wdqs,name=wdqs2001.eqiad.wmnet [production]
07:38 <gehel> reducing relative weight of wdqs2003 in pybal - T206423 [production]
07:35 <joal> Manually run download-project-namespace-map with proxy [analytics]
07:27 <banyek> enabling wmf-pt-kill for the first time on labsdb1010 [production]
07:20 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1092 with low weight - T205514 (duration: 01m 27s) [production]
07:00 <moritzm> installing git security updates [production]
03:31 <kart_> Update(d) cxserver to 47a864b [releng]
03:02 <legoktm> onlined integration-slave-jessie-1002 [releng]
02:20 <legoktm> legoktm@integration-slave-jessie-1002:/srv/jenkins-workspace/workspace$ sudo rm -rf * [releng]
2018-10-07
22:16 <bd808> Got webservice to connect to gateway properly with: webservice stop; rm $HOME/service.manifest; webservice --backend=kubernetes python start [tools.ldap]
22:07 <bd808> Restarted, then stopped and started webservice to attempt to fix gateway timeout errors. Failures continue. Will investigate further [tools.ldap]
21:57 <zhuyifei1999_> restarted maintain-kubeusers on tools-k8s-master-01 T194859 [tools]
21:48 <zhuyifei1999_> maintain-kubeusers on tools-k8s-master-01 seems to be stuck in an infinite 10-second loop. installed python3-dbg [tools]
21:44 <zhuyifei1999_> journal on tools-k8s-master-01 is full of etcd failures, did a puppet run, nothing interesting happened [tools]
16:40 <dereckson> Reset user email for account "Dominic Mayers" (T206421) [production]
16:35 <elukey> run a script in tmux (my username) on mw2201 to poll the status of a mcrouter key/route every 10s using its admin api (very lightweight but kill if needed) [production]
14:52 <onimisionipe> repooling wdqs2003. Caught up on lag, and lag issues also seem to be creeping up on wdqs200[1|2] [production]
04:29 <SMalyshev> temp depooled wdqs2003 [production]
03:12 <ejegg> disabled all fundraising scheduled jobs - something that looks like disk issues on civi1001 [production]
2018-10-06
21:20 <gehel> repooling wdqs2003: caught up on updater lag [production]
20:43 <_joe_> restarting apache2 on puppetmaster1001 [production]
19:16 <onimisionipe> depooling wdqs2003 [production]
18:10 <elukey> restart Yarn Resource Manager on an-master1002 to force an-master1001 to take the active role back (failed over due to a zk conn issue) [analytics]
18:09 <elukey> restart Yarn Resource Manager on an-master1002 to force an-master1001 to take the active role back (failed over due to a zk conn issue) [production]
17:07 <onimisionipe> restarting wdqs-blazegraph on wdqs2003 [production]
17:02 <framawiki> qdeled 5794887 too, stuck unblock job [tools.totoazero]
16:50 <framawiki> qdeled 5323359 and 5794089, maj_articles_recents jobs that had been stuck since Mon Sep 17 and Thu Sep 27 [tools.totoazero]
16:30 <framawiki> deployed 8550956 on quarry-web-01 [quarry]
13:59 <Reedy> cleared some large folders out of /tmp on deployment-deploy01 [releng]
13:48 <bblack> multatuli: update gdnsd package to 2.99.9930-beta-1+wmf1 [production]
13:47 <bblack> authdns1001: update gdnsd package to 2.99.9930-beta-1+wmf1 (correction to last msg) [production]
13:46 <bblack> authdns1001: update gdnsd package to 2.99.9161-beta-1+wmf1 [production]
12:57 <bblack> rebooting cp1076 [production]