2018-04-09
08:45 <elukey> upgrading eqiad api appservers to ICU 57-enabled HHVM build (T189295) [production]
08:37 <marostegui> Deploy schema change on db1080 - T187089 T185128 T153182 [production]
08:37 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1080 for alter table (duration: 00m 59s) [production]
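
The 08:37 pair above is the usual replica-depool pattern: edit wmf-config/db-eqiad.php on the deployment host, push just that file, then run the ALTER against the depooled host. A minimal sketch, assuming the staging checkout lives in /srv/mediawiki-staging and that scap's sync-file sub-command was used (both assumptions, not taken from this log):

  cd /srv/mediawiki-staging
  # Comment out (or set to weight 0) the db1080 entry in the relevant
  # section-load array of wmf-config/db-eqiad.php, then sync only that file:
  scap sync-file wmf-config/db-eqiad.php 'Depool db1080 for alter table'

When the alter finishes, the same steps run in reverse with a repool message, as in the 08:29 entry for db1106.
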
08:35 <jynus@tin> Synchronized wmf-config/db-codfw.php: Repool es2019 (duration: 00m 59s) [production]
08:32 <moritzm> upgrading remaining app servers in eqiad to ICU 57-enabled HHVM build (T189295) [production]
08:32 <_joe_> upgrading eqiad jobrunners to ICU 57-enabled HHVM build (T189295) [production]
08:29 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1106 after alter table (duration: 00m 58s) [production]
07:56 <marostegui> Remove /var/log/wikidata/rebuildTermSqlIndex.log* as per Amir1's request [production]
07:48 <moritzm> upgrading mw1276-1279 (API canaries) to ICU 57-enabled HHVM build (T189295) [production]
07:42 <_joe_> repooling mw1300 now with ICU 57-enabled HHVM build (T189295) [production]
07:38 <_joe_> upgrading mw1300 to ICU 57-enabled HHVM build (T189295) [production]
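
The 07:38/07:42 pair is the single-host canary step of the ICU 57 HHVM rollout: take one app server out of rotation, upgrade it, pool it again, and only then touch the wider fleet. A hedged sketch, assuming conftool's confctl handles pooling and that the build arrives via apt (package name and exact commands are assumptions, not from this log):

  confctl select 'name=mw1300.eqiad.wmnet' set/pooled=no     # depool the canary
  ssh mw1300.eqiad.wmnet 'sudo apt-get update && sudo apt-get -y install hhvm'   # pull the ICU 57-enabled build
  confctl select 'name=mw1300.eqiad.wmnet' set/pooled=yes    # repool, the 07:42 entry
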
07:32 <moritzm> upgrading mw1262-1265 to ICU 57-enabled HHVM build (T189295) [production]
07:24 <moritzm> repooling mw1261 after upgrade to ICU 57-enabled HHVM build (T189295) [production]
07:17 <moritzm> upgrading mw1261 to ICU 57-enabled HHVM build (T189295) [production]
07:15 <elukey> upgrade kafka burrow on kafkamon* [analytics]
07:09 <elukey> upgrade burrow to 1.0 on kafkamon[12]* - T188719 [production]
06:57 <Amir1> start of ladsgroup@terbium:~$ mwscript deleteAutoPatrolLogs.php --wiki=zhwiktionary --check-old --before 20180223210426 --sleep 2 (T184485) [production]
06:43 <marostegui> Reboot db2072 for kernel upgrade [production]
06:41 <marostegui> Stop MySQL on db2072 to clone db2092 from it - T170662 [production]
06:38 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2072 - T170662 (duration: 00m 59s) [production]
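
Read bottom-up, the db2072 entries describe depooling the source (06:38), stopping mysqld so db2092 can be cloned from it (06:41), and a kernel-upgrade reboot (06:43). A rough sketch of such a cold clone, run from a management host; the hostnames come from the log, while the data directory path, service name and tar-over-ssh transfer are assumptions rather than what was actually used:

  ssh db2072.codfw.wmnet 'sudo systemctl stop mariadb'        # the 06:41 entry
  ssh db2072.codfw.wmnet 'sudo tar -C /srv/sqldata -cf - .' \
    | ssh db2092.codfw.wmnet 'sudo tar -C /srv/sqldata -xf -'
  ssh db2072.codfw.wmnet 'sudo systemctl start mariadb'
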
06:24 <elukey> upgrade burrow to 1.0.0 in the stretch/jessie wikimedia apt repos [production]
06:21 <marostegui> Reboot db2092 for mariadb and kernel upgrade [production]
06:04 <marostegui@tin> Synchronized wmf-config/db-codfw.php: db2079 is now s8 candidate master (duration: 00m 59s) [production]
05:54 <marostegui> Stop MySQL on db2079 to change its binlog format [production]
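
Stopping MySQL to change the binlog format (05:54) is what you do when the change should be permanent: the value goes into the server configuration and is picked up on restart, rather than being flipped at runtime with SET GLOBAL. A hedged sketch; the drop-in path, service name and the ROW value are illustrative assumptions, not taken from this log:

  sudo systemctl stop mariadb
  printf '[mysqld]\nbinlog_format = ROW\n' | sudo tee /etc/mysql/conf.d/binlog-format.cnf
  sudo systemctl start mariadb
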
05:34 <marostegui> Deploy schema change on db1106 with replication enabled (this will generate lag on labs replicas) - T187089 T185128 T153182 [production]
05:34 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1106 for alter table (duration: 01m 00s) [production]
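
Unlike a host-by-host, binlog-disabled alter, the 05:34 run on db1106 is done with replication left on, so the statement flows onward to the labs replicas and can lag them, which is what the warning in the entry refers to. A hedged illustration of the difference; the table and index are placeholders, not the actual T187089/T185128/T153182 change:

  # replicates downstream (what the 05:34 entry describes):
  mysql -h db1106.eqiad.wmnet -e "ALTER TABLE example_table ADD INDEX example_idx (example_col);"
  # host-local only: disable binary logging for the session first
  mysql -h db1106.eqiad.wmnet -e "SET SESSION sql_log_bin = 0; ALTER TABLE example_table ADD INDEX example_idx (example_col);"
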
04:21 <bd808> Update to 5a6a643 [tools.cdnjs]
02:36 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.28) (duration: 05m 57s) [production]
2018-04-08
14:06 <Amir1> marking done campaigns inactive in eswikibooks, frwiki, hewiki, rowiki, sqwiki, svwiki [wikilabels]
2018-04-07
23:44 <Dereckson> OATHAuth disabled for Wikimedia SUL global account Barek (T191708) [production]
14:06 <Hauskatze> Updated stewardbots to 8c356de/master [tools.stewardbots]
07:28 <legoktm> disabled and cleaned up spam from @Farjksn on Phabricator [production]
00:14 <mutante> bromine - scheduled downtime, reboot for reinstall, upgrade to stretch, misc_static_services switched to codfw (T188163) [production]
2018-04-06
22:35 <mutante> rsyncing bugzilla-static raw html from eqiad to codfw VM [production]
22:15 <bd808> Enabled puppet on hafnium [rcm]
21:52 <paladox> re-enable puppet on gerrit-test3 [git]
21:50 <eddiegp> beta: Cherry-picking https://gerrit.wikimedia.org/r/c/424707/ , test for T173887 [releng]
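
Cherry-picking an open Gerrit change onto the beta cluster puppetmaster is the usual way to test a patch there before it merges. A sketch of what such a cherry-pick typically looks like; the checkout path and the patchset number (1) are assumptions, only change 424707 comes from the entry:

  cd /var/lib/git/operations/puppet
  git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/07/424707/1
  git cherry-pick FETCH_HEAD
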
21:42 <bd808> Disabled puppet on hafnium to work on mw-vagrant sudoer rules [rcm]
21:27 <paladox> disabling puppet on gerrit-test3 to test avatar stuff for gerrit [git]
19:59 <herron> moved rhodium:/var/lib/git/operations/puppet away and triggered puppet agent run to re-create [production]
19:43 <ottomata> running puppet-merge on rhodium after clash between puppet-merge and new patch submitted [production]
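
The two rhodium entries (again newest-first) record a clash between a manual puppet-merge and a freshly submitted patch, resolved by moving the wedged operations/puppet checkout aside and letting the puppet agent re-create it, as the 19:59 entry says. Roughly, with the backup name being an arbitrary choice:

  sudo mv /var/lib/git/operations/puppet /var/lib/git/operations/puppet.broken
  sudo puppet agent --test    # the agent run re-creates the checkout
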
19:23 <demon@tin> Finished scap: Forcing full scap. Mostly no-op, consistency, paranoia, that sort of thing (duration: 11m 51s) [production]
19:13 <bd808> wiki replicas: ran maintain-views --database mediawikiwiki --clean on labsdb10{09,10,11} for T191387 [production]
19:11 <demon@tin> Started scap: Forcing full scap. Mostly no-op, consistency, paranoia, that sort of thing [production]
19:02 <demon@tin> scap aborted: Forcing full scap, removed clean plugin updates (duration: 11m 03s) [production]
19:00 <herron> depooled rhodium (puppet master backend) again https://gerrit.wikimedia.org/r/#/c/424646/ [production]
18:51 <demon@tin> Started scap: Forcing full scap, removed clean plugin updates [production]
18:49 <demon@tin> scap failed: average error rate on 5/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/2cc7028226a539553178454fc2f14459 for details) [production]
18:47 <demon@tin> Pruned MediaWiki: 1.31.0-wmf.26 [keeping static files] (duration: 01m 51s) [production]
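
Read bottom-up (newest entries first), the demon@tin lines are one deploy session: the old 1.31.0-wmf.26 branch is pruned at 18:47, a sync is stopped at 18:49 by scap's canary check, a retry started at 18:51 is aborted at 19:02, and a final full sync runs from 19:11 to 19:23. A hedged reconstruction of the operator-side commands; the sub-command names reflect scap's CLI of that era and are assumptions, while the messages come straight from the log:

  scap clean 1.31.0-wmf.26                                      # 18:47, static files kept
  scap sync 'Forcing full scap, removed clean plugin updates'   # 18:51, aborted at 19:02
  scap sync 'Forcing full scap. Mostly no-op, consistency, paranoia, that sort of thing'

The 18:49 failure is scap's canary health gate: after syncing the canary app servers it checks their error rate in logstash, and a roughly 10x jump (here on 5 of 11 canaries) stops the deploy unless it is re-run with --force.
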
17:14 <joal> Launch manual mediawiki-history-reduced job to test memory setting (and index new data) -- mediawiki-history-reduced-wf-2018-03 [analytics]