2018-04-30
13:30 <zfilipin@tin> Synchronized php-1.32.0-wmf.1/extensions/AbuseFilter: SWAT: [[gerrit:429570|Dont use an empty string for block parameters (T189681)]] (duration: 01m 02s) [production]
13:30 <marostegui> Poweroff db1098 for HW maintenance - T193331 [production]
13:26 <marostegui> Stop MySQL on db1098 - T193331 [production]
13:21 <ottomata> beginning rolling reimage of kafka200[23] to stretch T192832 [production]
13:18 <zfilipin@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:429442|Enable RCPatrol in cswiki (T193242)]] (duration: 00m 59s) [production]
13:16 <marostegui> Drop unused _old tables from a few wikis - https://phabricator.wikimedia.org/T54932#4167221 [production]
13:12 <gehel> restarting elasticsearch codfw rolling restart for plugin update and NUMA config - T191543 / T191236 [production]
13:11 <elukey> reimage analytics1049 and 1050 to Debian Stretch [production]
13:09 <zfilipin@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:428854|Disable Datetime Selector on Special:Block on all wikis except Meta, MediaWiki, and German Wikipedia (T192962)]] (duration: 01m 00s) [production]
12:48 <arturo> aborrero@labtestnet2001:~ $ sudo rm /var/log/upstart/nova-api.log.1 <--- disk full, logrotate refuses to work because of that [production]
10:34 <vgutierrez> Updating puppet compiler facts [production]
10:30 <vgutierrez> Repool (Re-enable BGP) lvs3001 - T191897 [production]
10:06 <elukey> restart hdfs namenode on analytics1002 to pick up new heap settings (last step of the maintenance) [production]
10:00 <elukey> set analytics1001 as active HDFS Namenode using manual failover [production]
09:50 <elukey> restart HDFS Namenode on analytics1001 (current standby) again with Xmx/Xms set to 8g [production]
09:47 <elukey> restart HDFS Namenode on analytics1001 (current standby) [production]
09:32 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1060, fully pool db1090 (duration: 00m 59s) [production]
09:15 <ariel@tin> Finished deploy [dumps/dumps@a6baf69]: do not update existing rss feed file if the dump job it covers is more recent than the one for which a feed is requested (duration: 00m 04s) [production]
09:15 <ariel@tin> Started deploy [dumps/dumps@a6baf69]: do not update existing rss feed file if the dump job it covers is more recent than the one for which a feed is requested [production]
09:03 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1090 (duration: 00m 59s) [production]
09:01 <vgutierrez> Depool and reimage lvs3001 as stretch - T191897 [production]
08:39 <marostegui> Deploy schema change on db1076 - T191519 T188299 T190148 [production]
08:39 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1076 for alter table (duration: 00m 59s) [production]
08:38 <elukey> restart HDFS namenode on analytics1001 (standby master) to pick up new JVM settings - T193257 [production]
08:33 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1074 after alter table (duration: 01m 00s) [production]
08:23 <godog> swift eqiad-prod more weight to ms-be104[0-3] - T191896 [production]
08:16 <elukey> force a manual failover of the HDFS Namenode from analytics1001 to analytics1002 to test new GC Settings - T193257 [production]
08:15 <vgutierrez> Repool (Re-enable BGP) in lvs3002 - T191897 [production]
08:02 <jynus> stopping replication on both db1090 db instances to finish maintenance [production]
07:33 <jynus> restarting dbstore1001@s1 to apply config change [production]
07:31 <elukey> restart HDFS namenode on analytics1002 (standby master) to pick up new JVM settings - T193257 [production]
07:06 <marostegui> Restart replication on db1095:s3 [production]
07:05 <marostegui> Temporary stop replication on db1095:s3 [production]
06:48 <vgutierrez> Depool and reimage lvs3002 - T191897 [production]
06:11 <marostegui> Drop table edit_page_tracking from s3 - T57385 [production]
06:04 <marostegui> Drop table edit_page_tracking from s2 - T57385 [production]
05:59 <marostegui> Drop table edit_page_tracking from s1 - T57385 [production]
05:50 <marostegui> Drop table edit_page_tracking from s4, s5 and s7 - T57385 [production]
05:47 <marostegui> Drop table edit_page_tracking from s6 - T57385 [production]
05:28 <marostegui> Deploy schema change on db1074 - T191519 T188299 T190148 [production]
05:28 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1074 for alter table (duration: 01m 09s) [production]
02:57 <l10nupdate@tin> scap sync-l10n completed (1.32.0-wmf.1) (duration: 08m 18s) [production]
2018-04-29
17:46 <brion> rebuilding image metadata for PDFs on commons on terbium [production]
2018-04-28
23:42 <volans@tin> Synchronized wmf-config/db-eqiad.php: Depool db1098 (crashed) (duration: 01m 01s) [production]
15:49 <jynus@tin> Synchronized wmf-config/db-codfw.php: Depool db2081, crashed (duration: 01m 00s) [production]
05:19 <apergos> reimaged snapshot1005 to stretch [production]
2018-04-27
22:45 <mutante> mw2171,mw2172,mw2173 ff. - reinstalling with stretch and raid1-LVM [production]
22:07 <hashar> Running quibble-vendor-mysql-php70-docker against ~ 900 MediaWiki extensions. Triggered with a custom gear-client.py script from contint1001. PID 29710 [production]
19:58 <tgr> T193254 ran fixStuckGlobalRename.php for: Aliya klein Hasselb Husseinzadeh02 Jswf845 Lorraine Fgr Mikeypugs0134 Ncanty STEEEPGlobal Sunlight me THOR Global Defense Group TPBox Zenas Gao אֲבִי גְדוֹר ぽっぽ大将軍 [production]
18:16 <mutante> mw2167,mw2168,mw2169 - reinstalling with stretch and raid1-lvm [production]