2019-01-27
19:57 <addshore> bringing integration-slave-docker-1034 back online [releng]
19:50 <addshore> addshore@integration-slave-docker-1034:~$ sudo docker image prune -a --force --filter "until=2191h" // (3 months?) Total reclaimed space: 17.12GB [releng]
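(For context on the `until=2191h` filter in the entry above: the author's "(3 months?)" guess checks out. A quick arithmetic sanity check, assuming 30-day months:)

```shell
# 2191 hours expressed in days, then in approximate 30-day months
hours=2191
echo "$(( hours / 24 )) days"        # integer division: 91 days
echo "$(( hours / 24 / 30 )) months" # roughly 3 months
```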
16:22 <godog> powercycle ms-be1020 - T214778 [production]
13:02 <gtirloni> killed catbot.py due to excessive CPU usage (please don't run non-interactive scripts on the bastions -- https://wikitech.wikimedia.org/wiki/Help:Toolforge#Rules_of_use) [tools.wdml]
03:28 <marostegui> Fix x1 on dbstore1002 - T213670 [production]
02:24 <jforrester@deploy1001> Synchronized php-1.33.0-wmf.14/extensions/WikibaseMediaInfo/src/WikibaseMediaInfoHooks.php: Hot-deploy Ic2b08cb27 in WBMI to fix Commons File page display (duration: 00m 49s) [production]
2019-01-26
22:48 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/486791 [releng]
21:21 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/486737 [releng]
11:06 <volans> force rebooting icinga1001 (no ping, no ssh, stuck console) [production]
03:23 <marostegui> Convert all tables on incubatorwiki to innodb to fix s3 thread - T213670 [production]
02:55 <Joan> CVNBot18 restarted (Last message was received on RCReader 5385.431178 seconds ago) [cvn]
00:03 <XioNoX> split member-range ge-3/0/0 to ge-3/0/38 on asw-b-codfw [production]
2019-01-25
22:45 <bsitzmann@deploy1001> Finished deploy [mobileapps/deploy@5e859c4]: Update mobileapps to a8834e8 (T214728) (duration: 03m 27s) [production]
22:42 <bsitzmann@deploy1001> Started deploy [mobileapps/deploy@5e859c4]: Update mobileapps to a8834e8 (T214728) [production]
21:56 <krinkle@deploy1001> Synchronized wmf-config/flaggedrevs.php: I95c37d628557c (duration: 00m 46s) [production]
21:44 <krinkle@deploy1001> Synchronized wmf-config/: Idb695dd033d42 (duration: 00m 46s) [production]
21:43 <krinkle@deploy1001> Synchronized wmf-config/PhpAutoPrepend.php: Idb695dd033d42 (duration: 00m 47s) [production]
21:05 <robh> cleared sel on db1068, it had a power redundancy loss event (old and resolved) that was triggering the icinga check [production]
20:50 <bd808> Deployed new tcl/web Kubernetes image based on Debian Stretch (T214668) [tools]
20:11 <gtirloni> deleted project yandex-proxy T212306 [admin]
20:11 <gtirloni> deleted project T212306 [admin]
20:04 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Pool db1106 as an extra api host (duration: 00m 46s) [production]
19:36 <jynus> powercycle db1114 T214720 [production]
19:25 <thcipriani> reloading zuul to deploy https://gerrit.wikimedia.org/r/486503/ [releng]
19:21 <jynus> disabling notifications on db1114 [production]
19:21 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1114 (duration: 00m 46s) [production]
19:07 <thcipriani> reloading zuul to deploy https://gerrit.wikimedia.org/r/486501/ [releng]
18:32 <bsitzmann@deploy1001> Finished deploy [mobileapps/deploy@94b76f5]: Update mobileapps to 4c42e3d (T214714) (duration: 03m 33s) [production]
18:28 <bsitzmann@deploy1001> Started deploy [mobileapps/deploy@94b76f5]: Update mobileapps to 4c42e3d (T214714) [production]
17:51 <Joan> Restarted CVNBot18 (Last message was received on RCReader 4033.703116 seconds ago) [cvn]
17:17 <chaomodus> notebook1003 restarted nagios-nrpe-server due to oom - T212824 [production]
14:56 <hashar> contint1001: systemctl stop zuul-merger && find /srv/zuul/git -name .git -type d -print -execdir git gc --prune=now \; [releng]
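(The entry above stops the merger and then garbage-collects every repository under /srv/zuul/git by locating its `.git` directory and running `git gc` from the containing repo. A minimal, self-contained illustration of that `find -execdir` pattern, using a throwaway directory tree and `pwd` standing in for `git gc --prune=now`:)

```shell
# Illustrative only: build a fake repo root, then locate each .git
# directory and run a command from its parent directory via -execdir.
root=$(mktemp -d)
mkdir -p "$root/repoA/.git" "$root/repoB/.git"
find "$root" -name .git -type d -print -execdir pwd \;
rm -rf "$root"
```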
14:43 <hashar> contint1001: stopping zuul-merger for cleanup duties [production]
14:22 <andrewbogott> draining and moving tools-worker-1016 to a new labvirt for T214447 [tools]
14:22 <andrewbogott> draining and moving tools-worker-1021 to a new labvirt for T214447 [tools]
13:35 <hashar> flake8 broken under python2.7 due to configparser==3.5.2 https://github.com/jaraco/configparser/issues/27 [releng]
09:48 <marostegui> Add dbstore1005:3318 to tendril T210478 [production]
08:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1105 (duration: 00m 45s) [production]
08:00 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Give more traffic to db1105:3312 (duration: 00m 45s) [production]
07:51 <elukey> restart yarn/hdfs daemons on analytics1056 to pick up new disk settings - T214057 [production]
07:40 <elukey> drain + reboot analytics1054 after disk swap (verify reboot + restore correct fstab mountpoints) - T213038 [production]
07:30 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1105:3312 (duration: 00m 45s) [production]
07:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1105 (duration: 00m 47s) [production]
06:53 <marostegui> Stop MySQL on db1105 to upgrade MySQL [production]
06:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully depool db1105 (duration: 00m 46s) [production]
06:49 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1122 T210713 (duration: 00m 47s) [production]
06:13 <marostegui> Deploy schema change on db1122 - T210713 [production]
06:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1122 T210713 (duration: 00m 48s) [production]
06:04 <marostegui> Compress dbstore1002: staging.mep_word_persistence from Aria to InnoDB - T213706 [production]
05:42 <kartik@deploy1001> Finished deploy [cxserver/deploy@a5d7181]: Update cxserver to 356f0a1 (T213257, T213275) (duration: 04m 09s) [production]