2019-01-29
21:52 <jijiki> Depooling thumbor2002 due to disk failure - T214813 [production]
16:51 <arturo> T214499 update Netbox status for cloudvirt1023/1024/1025/1026/1027 from PLANNED to ACTIVE. These servers are actually providing services already. [production]
10:05 <jynus> stop, upgrade and restart db2065 [production]
09:28 <jynus> stop, upgrade and restart db2058 [production]
09:12 <jynus> stopping, upgrading and restarting db2035; this will cause lag on codfw-s2 [production]
08:58 <jynus> stop, upgrade and restart db2041 [production]
08:38 <jynus> stop, upgrade and restart db2056 [production]
08:17 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1114 after crash (duration: 00m 52s) [production]
03:32 <XioNoX> bump cr2-esams-cr2-eqiad OSPF cost to 2000 due to Level3 link flapping [production]
2019-01-28
23:51 <vgutierrez> restarting cp2014 - T214872 [production]
21:02 <Zoranzoki21> Finished wikitext export of the education program database content on srwiki - T174802 (duration: 8 minutes) [production]
20:54 <Zoranzoki21> Starting wikitext export of the education program database content on srwiki - T174802 (21:54 UTC+1) [production]
19:55 <brion> running final pass of requeueTranscodes.php on all wikis to make sure stray missing VP9 transcodes are cleaned up [production]
16:41 <hashar> contint1001: cleaning up disk space on / (docker images) [production]
16:36 <jynus> remove backups dir at dbstore2001 T214831 [production]
15:22 <thcipriani> restarting jenkins for update [production]
14:16 <jynus> stop, upgrade and reboot db2048; this will cause general lag/read-only on enwiki/s1-codfw for a few minutes [production]
13:52 <jynus> stop, upgrade and reboot db2092 [production]
12:55 <jynus> stop, upgrade and reboot db2085 [production]
12:45 <jynus> powercycle ms-be1034 [production]
12:42 <onimisionipe> restarting all elasticsearch instances on relforge1002 to test a spicerack command [production]
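(The relforge1002 restart above was exercising Spicerack, the Python library behind WMF operations cookbooks. Purely as illustration, and not the cookbook that was actually run, an old-style cookbook restarting the Elasticsearch units on one relforge host could look roughly like the sketch below; the host FQDN and the systemd unit pattern are assumptions.)

    """Illustrative sketch: restart the Elasticsearch instances on one relforge host."""
    import argparse

    __title__ = 'Restart Elasticsearch instances on a relforge host'


    def argument_parser():
        """Optional cookbook hook: parse command line arguments."""
        parser = argparse.ArgumentParser(description=__doc__)
        parser.add_argument('--host', default='relforge1002.eqiad.wmnet',  # assumed FQDN
                            help='FQDN of the host whose Elasticsearch units are restarted')
        return parser


    def run(args, spicerack):
        """Cookbook entry point: run systemctl on the target host via Cumin."""
        target = spicerack.remote().query(args.host)
        # The unit pattern is an assumption; relforge hosts run several
        # elasticsearch_* template instances, so restart them all by glob.
        target.run_sync("systemctl restart 'elasticsearch_*'")
        return 0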
11:21 <jynus> stop, upgrade and reboot db2062 [production]
10:45 <jynus> stop, upgrade and reboot db2055 [production]
2019-01-27
16:22 <godog> powercycle ms-be1020 - T214778 [production]
03:28 <marostegui> Fix x1 on dbstore1002 - T213670 [production]
02:24 <jforrester@deploy1001> Synchronized php-1.33.0-wmf.14/extensions/WikibaseMediaInfo/src/WikibaseMediaInfoHooks.php: Hot-deploy Ic2b08cb27 in WBMI to fix Commons File page display (duration: 00m 49s) [production]
2019-01-26
11:06 <volans> force rebooting icinga1001 (no ping, no ssh, stuck console) [production]
03:23 <marostegui> Convert all tables on incubatorwiki to InnoDB to fix the s3 replication thread - T213670 [production]
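(The log does not record the exact commands, but the conversion above amounts to running ALTER TABLE ... ENGINE=InnoDB over every remaining non-InnoDB table in the wiki's database. A minimal sketch, with placeholder host, credentials and connection details:)

    # Sketch: convert every non-InnoDB table in one database to InnoDB.
    # Host, credentials and database name below are placeholders.
    import pymysql

    conn = pymysql.connect(host='db-host.example', user='admin', password='secret',
                           database='incubatorwiki', autocommit=True)
    with conn.cursor() as cur:
        # Find tables still on another storage engine (e.g. MyISAM or Aria).
        cur.execute(
            "SELECT TABLE_NAME FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = %s AND ENGINE <> 'InnoDB'",
            ('incubatorwiki',))
        for (table,) in cur.fetchall():
            # ALTER TABLE rebuilds the table and can block writes for a while,
            # so converting table by table keeps the impact bounded and visible.
            cur.execute('ALTER TABLE `{}` ENGINE=InnoDB'.format(table))
    conn.close()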
00:03 <XioNoX> split member-range ge-3/0/0 to ge-3/0/38 on asw-b-codfw [production]
2019-01-25
22:45 <bsitzmann@deploy1001> Finished deploy [mobileapps/deploy@5e859c4]: Update mobileapps to a8834e8 (T214728) (duration: 03m 27s) [production]
22:42 <bsitzmann@deploy1001> Started deploy [mobileapps/deploy@5e859c4]: Update mobileapps to a8834e8 (T214728) [production]
21:56 <krinkle@deploy1001> Synchronized wmf-config/flaggedrevs.php: I95c37d628557c (duration: 00m 46s) [production]
21:44 <krinkle@deploy1001> Synchronized wmf-config/: Idb695dd033d42 (duration: 00m 46s) [production]
21:43 <krinkle@deploy1001> Synchronized wmf-config/PhpAutoPrepend.php: Idb695dd033d42 (duration: 00m 47s) [production]
21:05 <robh> cleared the SEL on db1068; it had an old, already-resolved power redundancy loss event that was triggering the Icinga check [production]
20:04 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Pool db1106 as an extra api host (duration: 00m 46s) [production]
19:36 <jynus> powercycle db1114 T214720 [production]
19:21 <jynus> disabling notifications on db1114 [production]
19:21 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1114 (duration: 00m 46s) [production]
18:32 <bsitzmann@deploy1001> Finished deploy [mobileapps/deploy@94b76f5]: Update mobileapps to 4c42e3d (T214714) (duration: 03m 33s) [production]
18:28 <bsitzmann@deploy1001> Started deploy [mobileapps/deploy@94b76f5]: Update mobileapps to 4c42e3d (T214714) [production]
17:17 <chaomodus> notebook1003: restarted nagios-nrpe-server due to OOM - T212824 [production]
14:43 <hashar> contint1001: stopping zuul-merger for cleanup duties [production]
09:48 <marostegui> Add dbstore1005:3318 to tendril T210478 [production]
08:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1105 (duration: 00m 45s) [production]
08:00 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Give more traffic to db1105:3312 (duration: 00m 45s) [production]
07:51 <elukey> restart yarn/hdfs daemons on analytics1056 to pick up new disk settings - T214057 [production]
07:40 <elukey> drain + reboot analytics1054 after disk swap (verify reboot + restore correct fstab mountpoints) - T213038 [production]
07:30 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1105:3312 (duration: 00m 45s) [production]
07:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1105 (duration: 00m 47s) [production]