2018-03-19
07:27 <marostegui> Reload dbproxy1002 and dbproxy1007 to get the new config - T189773 [production]
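dbproxy hosts front the misc databases with HAProxy, so "reload to get the new config" is normally a graceful HAProxy reload rather than a full restart. A minimal sketch of what that tends to look like (the config path is the Debian default and an assumption here):

  $ sudo haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax-check the new config first
  $ sudo systemctl reload haproxy                 # graceful reload, keeps existing client connections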
06:20 <marostegui> Deploy schema change on db1091 - T187089 T185128 T153182 [production]
06:13 <marostegui> Stop MySQL on db1091 for kernel and mariadb upgrade [production]
06:13 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1091 for schema change, kernel upgrade and mariadb upgrade (duration: 00m 58s) [production]
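Depools like this follow the usual mediawiki-config pattern: take the host out of the weighted load lists in wmf-config/db-eqiad.php, then sync that one file from the deployment host so every appserver picks it up. A rough sketch, with only the host name taken from the entry above (the array key is an assumption about the file's layout):

  # on the deployment host, in the mediawiki-config checkout
  $ vim wmf-config/db-eqiad.php    # drop db1091 from the 'sectionLoads' weights for its section
  $ scap sync-file wmf-config/db-eqiad.php 'Depool db1091 for schema change, kernel upgrade and mariadb upgrade'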
02:39 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.25) (duration: 10m 54s) [production]
2018-03-17
18:41 <elukey> executed apt-get clean on scb1004 to free some space (root partition disk space warning) [production]
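apt-get clean simply empties the local cache of downloaded .deb packages, which is often enough to clear a root-partition space warning. A minimal sketch of the check, clean, verify sequence:

  $ df -h /                  # see how full the root partition is
  $ sudo apt-get clean       # remove cached packages from /var/cache/apt/archives
  $ df -h /                  # confirm space was reclaimed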
03:09 <krinkle@tin> Synchronized docroot/noc/db.php: noc: I410a56431a (duration: 00m 59s) [production]
00:13 <mutante> running puppet on all cache::misc to rename director bromine to webserver_misc_static (T188163) [production]
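Fleet-wide puppet runs like this one are usually driven from a cumin host, selecting targets by their Puppet role. A sketch under the assumption that the role query and the run-puppet-agent wrapper are available (both are assumptions here; a plain 'puppet agent -t' per host works too):

  $ sudo cumin 'R:Class = Role::Cache::Misc' 'run-puppet-agent'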
2018-03-16
23:32 <mutante> signing puppet cert for vega.codfw.wmnet, initial puppet run after fresh stretch install (T188163) [production]
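The first puppet run on a freshly installed host submits a certificate signing request that has to be approved on the puppetmaster before the catalog can be applied. Roughly, using the Puppet 4 commands current at the time:

  # on vega.codfw.wmnet, after the fresh stretch install
  $ sudo puppet agent --test          # sends the CSR, then waits for the signed cert
  # on the puppetmaster
  $ sudo puppet cert list             # the pending request for vega.codfw.wmnet appears here
  $ sudo puppet cert sign vega.codfw.wmnet
  # a second agent run on the host then applies the full catalog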
18:43 <mutante> creating new ganeti VM vega.codfw.wmnet to be equivalent of bromine, 1G RAM, 30G disk, 1vCPU (T189899) [production]
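Ganeti VMs of this size are created on the cluster master with gnt-instance add; the sizes below come from the entry, while the disk template and OS definition are assumptions:

  $ sudo gnt-instance add -t drbd -o debootstrap+default \
        --disk 0:size=30g -B memory=1g,vcpus=1 \
        vega.codfw.wmnet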
18:13 <jynus> switching back wikireplica cloud dns to the original config [production]
17:32 <jynus> reimage dbproxy1010 [production]
16:29 <jynus> updating wikireplica_dns 2/3 [production]
16:22 <moritzm> installing curl security updates [production]
16:09 <marostegui> Stop MySQL on db1020 - T189773 [production]
14:48 <andrewbogott> reset contintcloud quotas as per https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS/Admin/Troubleshooting#incorrect_quota_violations [production]
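Quota resets of this kind typically amount to re-asserting the project's limits so nova recomputes its usage; with the OpenStack CLI that looks roughly like the following (the numbers are purely illustrative, and the authoritative procedure is whatever the linked troubleshooting page prescribes):

  $ openstack quota set --instances 20 --cores 40 --ram 81920 contintcloud
  $ openstack quota show contintcloud     # verify the new values took effect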
14:48 <jynus> reimage dbproxy1011 [production]
14:27 <andrewbogott> restarting nodepool on nodepool1001 [production]
14:25 <elukey> reboot druid1002 for kernel updates [production]
14:14 <andrewbogott> restarting rabbitmq on labcontrol1001 [production]
13:57 <andrewbogott> stopping nodepool temporarily during changes to nova.conf [production]
13:41 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2050 (duration: 00m 58s) [production]
13:15 <chasemp> disable puppet across cloud things for safe rollout [production]
12:52 <moritzm> uploaded libsodium23/php-apcu/php-mailparse to thirdparty/php72 (deps/extensions needed by Phabricator) [production]
12:51 <ema> text-esams: reboot for kernel upgrades T188092 and to mitigate https://grafana.wikimedia.org/dashboard/db/varnish-failed-fetches?panelId=7&fullscreen&orgId=1&from=1518746284946&to=1521204628041 [production]
12:12 <marostegui> Reboot dbproxy1005 for kernel upgrade [production]
12:02 <marostegui> Run pt-table-checksum on m2 [production]
12:00 <marostegui> Run pt-table-checksum on m5 [production]
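pt-table-checksum (Percona Toolkit) checksums each table in chunks on the master and replays the same checksum queries through replication, so any replica whose results differ is flagged as drifted. A sketch of a typical invocation against a misc section master; the DSN and options shown are assumptions:

  $ pt-table-checksum h=m5-master.eqiad.wmnet \
        --replicate=percona.checksums \
        --no-check-binlog-format \
        --chunk-time=0.5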
11:11 <hashar> zuul: reenqueue all coverage jobs lost when restarting Zuul [production]
10:53 <hashar> Upgrading zuul to zuul_2.5.1-wmf4 to resolve a mutex deadlock T189859 [production]
10:45 <jynus> disable puppet and load balance between 3 wikireplicas on dbproxy1010 [production]
10:19 <jynus> upgrade and restart of dbproxy1009 (passive) [production]
10:01 <elukey> restart eventlogging_sync on db1108 (eventlogging db slave) as precautions after the change of m4-master.eqiad.wmnet's CNAME [production]
10:00 <moritzm> reverting the HHVM/ICU 57 setup on mwdebug2001 which was used for the dry run tests [production]
09:57 <elukey> restart eventlogging-consumer@mysql-eventbus on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) [production]
09:56 <hashar> Zuul coverage pipeline is deadlocked on an unreleased mutex. Will need a new Zuul version. [production]
09:51 <elukey> restart eventlogging-consumer@mysql-m4 on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) [production]
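The restarts above are needed because a long-lived consumer process resolves m4-master.eqiad.wmnet once at connect time; after the CNAME moves to a different dbproxy, only a reconnect picks up the new target. Roughly:

  $ dig +short m4-master.eqiad.wmnet CNAME                   # confirm it now points at dbproxy1004
  $ sudo systemctl restart eventlogging-consumer@mysql-m4
  $ sudo journalctl -u eventlogging-consumer@mysql-m4 -f     # watch it reconnect to the new proxy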
09:31 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Restore original weight for es1015 after kernel, mariadb and socket upgrade (duration: 00m 57s) [production]
09:27 <oblivian@tin> Finished deploy [netbox/deploy@ccc342a]: Re-deploying with the newly built artifacts/2 (duration: 00m 29s) [production]
09:26 <oblivian@tin> Started deploy [netbox/deploy@ccc342a]: Re-deploying with the newly built artifacts/2 [production]
09:17 <oblivian@tin> (no justification provided) [production]
09:17 <oblivian@tin> Finished deploy [netbox/deploy@f3e0159]: Re-deploying with the newly built artifacts (duration: 00m 47s) [production]
09:16 <oblivian@tin> Started deploy [netbox/deploy@f3e0159]: Re-deploying with the newly built artifacts [production]
09:15 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1106 - T183469 (duration: 00m 57s) [production]
08:58 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Slowly repool es1015 after kernel, mariadb and socket upgrade (duration: 00m 56s) [production]
08:49 <jynus> upgrade and restart of dbproxy1004 (passive) [production]
08:41 <marostegui> Stop MySQL on es1015 for maintenance [production]
08:40 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool es1015 for kernel, mariadb and socket upgrade (duration: 00m 58s) [production]
08:40 <elukey> reboot druid1006 for kernel updates [production]
08:29 <elukey> reboot druid1005 for kernel updates [production]
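Kernel-update reboots like the druid ones above follow the same basic check everywhere: compare the running kernel against the newest installed image and reboot only if they differ. A rough sketch:

  $ uname -r                                                                      # kernel currently running
  $ dpkg -l 'linux-image-[0-9]*' | awk '/^ii/ {print $2}' | sort -V | tail -1     # newest installed image
  $ sudo reboot                                                                   # only if the two differ; re-check uname -r afterwards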