2019-01-28
23:51 <vgutierrez> restarting cp2014 - T214872 [production]
21:02 <Zoranzoki21> Done wikitext export of the education program database content on srwiki - T174802 (duration: 8 minutes) [production]
20:54 <Zoranzoki21> Starting wikitext export of the education program database content on srwiki - T174802 (21:54 UTC+1) [production]
19:55 <brion> running final pass of requeueTranscodes.php on all wikis to make sure stray missing VP9 transcodes are cleaned up [production]
16:41 <hashar> contint1001: cleaning up disk space on / (docker images) [production]
16:36 <jynus> remove backups dir at dbstore2001 T214831 [production]
15:22 <thcipriani> restarting jenkins for update [production]
14:16 <jynus> stop, upgrade and reboot db2048, this will cause general lag/read only on enwiki/s1-codfw for a few minutes [production]
13:52 <jynus> stop, upgrade and reboot db2092 [production]
12:55 <jynus> stop, upgrade and reboot db2085 [production]
12:45 <jynus> powercycle ms-be1034 [production]
12:42 <onimisionipe> restarting all elasticsearch instances on relforge1002 to test a spicerack command [production]
11:21 <jynus> stop, upgrade and reboot db2062 [production]
10:45 <jynus> stop, upgrade and reboot db2055 [production]
2019-01-27
16:22 <godog> powercycle ms-be1020 - T214778 [production]
03:28 <marostegui> Fix x1 on dbstore1002 - T213670 [production]
02:24 <jforrester@deploy1001> Synchronized php-1.33.0-wmf.14/extensions/WikibaseMediaInfo/src/WikibaseMediaInfoHooks.php: Hot-deploy Ic2b08cb27 in WBMI to fix Commons File page display (duration: 00m 49s) [production]
2019-01-26
11:06 <volans> force rebooting icinga1001 (no ping, no ssh, stuck console) [production]
03:23 <marostegui> Convert all tables on incubatorwiki to innodb to fix s3 thread - T213670 [production]
00:03 <XioNoX> split member-range ge-3/0/0 to ge-3/0/38 on asw-b-codfw [production]
2019-01-25
22:45 <bsitzmann@deploy1001> Finished deploy [mobileapps/deploy@5e859c4]: Update mobileapps to a8834e8 (T214728) (duration: 03m 27s) [production]
22:42 <bsitzmann@deploy1001> Started deploy [mobileapps/deploy@5e859c4]: Update mobileapps to a8834e8 (T214728) [production]
21:56 <krinkle@deploy1001> Synchronized wmf-config/flaggedrevs.php: I95c37d628557c (duration: 00m 46s) [production]
21:44 <krinkle@deploy1001> Synchronized wmf-config/: Idb695dd033d42 (duration: 00m 46s) [production]
21:43 <krinkle@deploy1001> Synchronized wmf-config/PhpAutoPrepend.php: Idb695dd033d42 (duration: 00m 47s) [production]
21:05 <robh> cleared sel on db1068, it had a power redundancy loss event (old and resolved) that was triggering the icinga check [production]
20:04 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Pool db1106 as an extra api host (duration: 00m 46s) [production]
19:36 <jynus> powercycle db1114 T214720 [production]
19:21 <jynus> disabling notifications on db1114 [production]
19:21 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1114 (duration: 00m 46s) [production]
18:32 <bsitzmann@deploy1001> Finished deploy [mobileapps/deploy@94b76f5]: Update mobileapps to 4c42e3d (T214714) (duration: 03m 33s) [production]
18:28 <bsitzmann@deploy1001> Started deploy [mobileapps/deploy@94b76f5]: Update mobileapps to 4c42e3d (T214714) [production]
17:17 <chaomodus> notebook1003: restarted nagios-nrpe-server due to OOM - T212824 [production]
14:43 <hashar> contint1001: stopping zuul-merger for cleanup duties [production]
09:48 <marostegui> Add dbstore1005:3318 to tendril T210478 [production]
08:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1105 (duration: 00m 45s) [production]
08:00 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Give more traffic to db1105:3312 (duration: 00m 45s) [production]
07:51 <elukey> restart yarn/hdfs daemons on analytics1056 to pick up new disk settings - T214057 [production]
07:40 <elukey> drain + reboot analytics1054 after disk swap (verify reboot + restore correct fstab mountpoints) - T213038 [production]
07:30 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1105:3312 (duration: 00m 45s) [production]
07:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1105 (duration: 00m 47s) [production]
06:53 <marostegui> Stop MySQL on db1105 to upgrade MySQL [production]
06:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully depool db1105 (duration: 00m 46s) [production]
06:49 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1122 T210713 (duration: 00m 47s) [production]
06:13 <marostegui> Deploy schema change on db1122 - T210713 [production]
06:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1122 T210713 (duration: 00m 48s) [production]
06:04 <marostegui> Compress dbstore1002: staging.mep_word_persistence from Aria to InnoDB - T213706 [production]
05:42 <kartik@deploy1001> Finished deploy [cxserver/deploy@a5d7181]: Update cxserver to 356f0a1 (T213257, T213275) (duration: 04m 09s) [production]
05:38 <kartik@deploy1001> Started deploy [cxserver/deploy@a5d7181]: Update cxserver to 356f0a1 (T213257, T213275) [production]
03:12 <mutante> scandium: sudo chgrp -R wikidev /srv/deployment/parsoid/deploy/ ; sudo chmod -R g+w /srv/deployment/parsoid/deploy/ (T201366) [production]