2019-02-18
17:49 <jijiki> Reimaging thumbor1004 to stretch - T214597 [production]
16:11 <andrewbogott> deleting clouddb-utils-01 VM and associated puppet prefix; we aren't going to run maintain_dbusers here after all [clouddb-services]
15:51 <andrewbogott> removing dns and public IP from clouddb1001 [clouddb-services]
15:41 <jynus> performing es2 & es3 backups into es2002 [production]
15:38 <elukey> kill/spawn deployment-aqs0[2,3] in deployment-prep with Debian Stretch [releng]
15:21 <jynus> move logical backups to subdirectory T210292 [production]
14:29 <moritzm> rebooting mw2167 for kernel tests [production]
14:12 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/490590 [releng]
13:59 <marostegui> Drop ep_* tables from s7 - T174802 [production]
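The "Drop ep_*" entries in this log remove the abandoned Education Program extension tables from each database section (T174802). A minimal sketch of enumerating tables by that prefix and generating DROP statements for review; "examplewiki" is a placeholder database name, not one taken from the log:
```
# Illustrative sketch only; "examplewiki" is a placeholder.
# List the ep_-prefixed tables first, then generate DROP statements for review.
mysql examplewiki -e "SHOW TABLES LIKE 'ep\_%';"
mysql -N -e "SELECT CONCAT('DROP TABLE ', table_schema, '.', table_name, ';')
             FROM information_schema.tables
             WHERE table_schema = 'examplewiki' AND table_name LIKE 'ep\_%';"
```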
13:47 <arturo> rebooting tools-sgebastion-07 to try fixing general slowness [tools]
13:25 <jijiki> Depooling thumbor1004 to check if the rest of our hosts can handle the load without it - T214597 [production]
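Depooling thumbor1004 takes it out of the serving pool so the remaining Thumbor hosts absorb its traffic. A minimal sketch, assuming pool state is managed through conftool/confctl; the exact selector syntax shown here is an assumption, not quoted from the log:
```
# Assumed confctl usage, shown for illustration only.
sudo confctl select 'name=thumbor1004.eqiad.wmnet' set/pooled=no   # depool
sudo confctl select 'name=thumbor1004.eqiad.wmnet' get             # verify state
sudo confctl select 'name=thumbor1004.eqiad.wmnet' set/pooled=yes  # repool later
```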
12:34 <moritzm> installing brltty bugfix update from stretch point release [production]
12:31 <moritzm> upgrading stat1005 to buster [production]
12:28 <XioNoX> update clouddb_return term from cloud-in4 on cr1/2-eqiad - T216353 [production]
12:00 <Krenair> T216067 Stopping mysql on -db04 to begin copy to -db05. Note crashed tables centralauth.globaluser and centralauth.localuser [releng]
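The crashed-table note means MariaDB flagged centralauth.globaluser and centralauth.localuser as needing repair after the instance went down uncleanly. As a generic illustration (not a record of what was actually run on -db04), standard MySQL statements can check the tables and, for storage engines that support it, repair them:
```
# Generic check/repair sketch; REPAIR TABLE applies only to engines such as MyISAM/Aria.
mysql -e "CHECK TABLE centralauth.globaluser, centralauth.localuser;"
mysql -e "REPAIR TABLE centralauth.globaluser, centralauth.localuser;"
```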
11:57 <elukey> kill/spawn deployment-aqs01 with Debian Stretch in deployment-prep [releng]
11:53 <moritzm> installing hdparm bugfix update from stretch point release [production]
11:45 <arturo> manually start deployment-db03 per Krenair request [deployment-prep]
11:45 <arturo> manually start deployment-db03 per Krenair request [releng]
11:36 <moritzm> installing uriparser security updates [production]
11:29 <hasharAway> beta: tried to start instance deployment-db03 172.16.5.23 --> ERROR | T216067 [releng]
11:11 <moritzm> installing c3p0 security updates [production]
10:54 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1105:3311 T210713 (duration: 00m 46s) [production]
10:54 <jijiki> Reimaging thumbor2002 to stretch - T214597 [production]
10:40 <marostegui> Drop tables ep_* from s2 (cswiki nlwiki ptwiki svwiki) T174802 [production]
09:50 <marostegui> Deploy schema change on db1105:3311 T210713 [production]
09:50 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1105:3311 T210713 (duration: 00m 46s) [production]
09:46 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1099:3311 T210713 (duration: 00m 46s) [production]
09:28 <marostegui> Drop ep_* from s6 (ruwiki) - T174802 [production]
09:16 <marostegui> Deploy schema change on db1099:3311 - T210713 [production]
09:16 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1099:3311 T210713 (duration: 00m 48s) [production]
09:08 <marostegui> Deploy schema change on dbstore1003:3311 and dbstore1001:3311 - T210713 [production]
08:27 <marostegui> Drop ep_* tables from s5 (srwiki) - T174802 [production]
08:23 <marostegui> Deploy schema change on s1 codfw master (db2048), lag will be generated on s1 codfw - T210713 [production]
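Applying the schema change directly on the codfw master lets it flow to the codfw replicas through replication, and they lag while they apply it. A generic way to watch that lag on a replica (illustrative, not quoted from the log):
```
# Replication lag and SQL-thread health; run on the replica being watched.
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Seconds_Behind_Master|Slave_SQL_Running'
```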
07:07 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1119 after mysql upgrade (duration: 00m 46s) [production]
06:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: repool db1119 into API service after mysql upgrade (duration: 00m 46s) [production]
06:49 <marostegui> Reboot db2085 to disable kernel debug mode T216273 [production]
06:42 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1119 after mysql upgrade (duration: 00m 46s) [production]
06:29 <marostegui> Stop MySQL on db1119 for mysql and kernel upgrade [production]
06:29 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1119 for mysql upgrade (duration: 01m 01s) [production]
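The db1119 entries above (newest first) show the usual maintenance cycle: depool it in wmf-config/db-eqiad.php, stop MySQL, upgrade, then repool gradually and finally into the API group. A hedged sketch of the depool-and-stop half, assuming the config edit is pushed with scap's file sync; the exact invocation is an assumption, not quoted from the log:
```
# Assumed/illustrative only: push the depool edit, then shut MySQL down cleanly on db1119.
scap sync-file wmf-config/db-eqiad.php 'Depool db1119 for mysql upgrade'
mysql -e "STOP SLAVE;"
mysqladmin shutdown
```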
05:55 <marostegui> Deploy schema change on s8 primary master (db1071) - T210713 [production]
05:52 <marostegui> Set dbstore1002 to read-only to start the migration T210478 T215589 [production]
2019-02-17
22:33 <bd808> Migrated from Trusty -> Stretch -> Kubernetes [tools.mysql-php-session-test]
22:23 <zhuyifei1999_> uncordon tools-worker-1010.tools.eqiad.wmflabs [tools]
22:13 <bd808> Migrated from Trusty -> Stretch -> Kubernetes [tools.my-first-flask-tool]
22:11 <zhuyifei1999_> rebooting tools-worker-1010.tools.eqiad.wmflabs [tools]
22:10 <zhuyifei1999_> draining tools-worker-1010.tools.eqiad.wmflabs; `docker ps` is hanging (no idea why), plus other weirdness like pods stuck in ContainerCreating forever [tools]
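The drain / reboot / uncordon entries for tools-worker-1010 above are the standard way to cycle a Kubernetes worker: evict its pods, reboot it, then mark it schedulable again. A minimal sketch with stock kubectl (flag names as they existed in this Kubernetes era):
```
# Evict pods from the node; DaemonSet pods are skipped and local emptyDir data is discarded.
kubectl drain tools-worker-1010.tools.eqiad.wmflabs --ignore-daemonsets --delete-local-data
# After the reboot, allow scheduling on the node again.
kubectl uncordon tools-worker-1010.tools.eqiad.wmflabs
```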
21:43 <bd808> Force deleted pod stuck in Terminating state with ` kubectl delete po/trusty-tools-909545302-jwrz7 --now` [tools.trusty-tools]
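When a pod sits in Terminating and never finishes, the grace period can be skipped; the entry above uses `--now`, and the more forceful generic variant looks like this (pod name taken from the log entry, namespace flag omitted for brevity):
```
# Find pods stuck in Terminating, then force removal without waiting for kubelet
# confirmation (use with care).
kubectl get pods --all-namespaces | grep Terminating
kubectl delete pod trusty-tools-909545302-jwrz7 --grace-period=0 --force
```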
21:21 <bstorm_> The slave of labsdb1005.eqiad.wmnet is now clouddb1001.clouddb-services.eqiad.wmflabs [clouddb-services]