2018-03-19
09:10 <godog> depool codfw puppetmaster - T184562 [production]
09:08 <marostegui> Stop MySQL on es1016 for kernel, mariadb and socket location upgrade [production]
09:07 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool es1016 for kernel, mariadb and socket location upgrade (duration: 00m 58s) [production]
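(Depooling a host like es1016 is done by commenting its entry out of the load arrays in wmf-config/db-eqiad.php and syncing the file out with scap; a minimal sketch of the workflow, where the weight entry shown is illustrative rather than the exact hunk:)

    # on the deploy host, in the mediawiki-config staging directory:
    # edit wmf-config/db-eqiad.php and comment the host out of its load array, e.g.
    #     'es1016' => 100,   -->   # 'es1016' => 100, # depooled, see log entry above
    scap sync-file wmf-config/db-eqiad.php 'Depool es1016 for kernel, mariadb and socket location upgrade'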
08:57 <moritzm> installing openjdk-8 security updates [production]
08:41 <elukey> reboot thorium for kernel security upgrades (hosts all analytics websites, they will go down temporarily) [production]
08:26 <moritzm> installing libvorbis security updates [production]
08:22 <elukey> revert previous state on aqs1004, the new pkg might need some more work - T189529 [production]
08:19 <marostegui> Reset slave on db1106 to get it ready for s1 - https://phabricator.wikimedia.org/T183469 [production]
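(Getting a replica ready for a new section generally means tearing down its old replication state and pointing it at the new master; a hedged sketch, with the master host and binlog coordinates as hypothetical placeholders:)

    # on db1106; everything in <...> is a hypothetical placeholder
    sudo mysql -e "STOP SLAVE; RESET SLAVE ALL;"
    sudo mysql -e "CHANGE MASTER TO MASTER_HOST='<s1-master>', MASTER_USER='<repl-user>', MASTER_PASSWORD='<secret>', MASTER_LOG_FILE='<binlog-file>', MASTER_LOG_POS=<pos>;"
    sudo mysql -e "START SLAVE;"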
08:11 <marostegui> Reboot db1106 for kernel upgrade [production]
07:58 <elukey> manually installed cassandra-2.2.6-wmf3 on aqs1004 - T189529 [production]
07:55 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1065 - T183469 (duration: 00m 57s) [production]
07:47 <elukey> drain cassandra instances and reboot aqs1004 for kernel upgrades [production]
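(nodetool drain flushes memtables and stops the node accepting writes, so the instances shut down cleanly before the reboot; a sketch, assuming per-instance nodetool wrappers on the multi-instance aqs hosts:)

    nodetool-a drain   # per-instance wrapper names are an assumption
    nodetool-b drain
    sudo reboot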
07:44 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Move db1106 from s5 to s1 - T183469 (duration: 01m 00s) [production]
07:27 <marostegui> Reload dbproxy1002 and dbproxy1007 to get the new config - T189773 [production]
06:20 <marostegui> Deploy schema change on db1091 - T187089 T185128 T153182 [production]
06:13 <marostegui> Stop MySQL on db1091 for kernel and mariadb upgrade [production]
06:13 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1091 for schema change, kernel upgrade and mariadb upgrade (duration: 00m 58s) [production]
02:39 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.25) (duration: 10m 54s) [production]
2018-03-17
18:41 <elukey> executed apt-get clean on scb1004 to free some space (root partition disk space warning) [production]
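(apt-get clean simply empties the package cache under /var/cache/apt/archives, which is often enough to clear a root-partition disk space warning:)

    df -h /             # check how low the root partition is
    sudo apt-get clean  # drop cached .deb files from /var/cache/apt/archives
    df -h /             # confirm the space came back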
03:09 <krinkle@tin> Synchronized docroot/noc/db.php: noc: I410a56431a (duration: 00m 59s) [production]
00:13 <mutante> running puppet on all cache::misc to rename director bromine to webserver_misc_static (T188163) [production]
2018-03-16
23:32 <mutante> signing puppet cert for vega.codfw.wmnet, initial puppet run after fresh stretch install (T188163) [production]
18:43 <mutante> creating new ganeti VM vega.codfw.wmnet to be the equivalent of bromine, 1G RAM, 30G disk, 1 vCPU (T189899) [production]
18:13 <jynus> switching back wikireplica cloud dns to the original config [production]
17:32 <jynus> reimage dbproxy1010 [production]
16:29 <jynus> updating wikireplica_dns 2/3 [production]
16:22 <moritzm> installing curl security updates [production]
16:09 <marostegui> Stop MySQL on db1020 - T189773 [production]
14:48 <andrewbogott> reset contintcloud quotas as per https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS/Admin/Troubleshooting#incorrect_quota_violations [production]
14:48 <jynus> reimage dbproxy1011 [production]
14:27 <andrewbogott> restarting nodepool on nodepool1001 [production]
14:25 <elukey> reboot druid1002 for kernel updates [production]
14:14 <andrewbogott> restarting rabbitmq on labcontrol1001 [production]
13:57 <andrewbogott> stopping nodepool temporarily during changes to nova.conf [production]
13:41 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2050 (duration: 00m 58s) [production]
13:15 <chasemp> disable puppet across cloud things for safe rollout [production]
12:51 <moritzm> uploaded libsodium23/php-apcu/php-mailparse to thirdparty/php72 (deps/extensions needed by Phabricator) [production]
12:51 <ema> text-esams: reboot for kernel upgrades T188092 and to mitigate https://grafana.wikimedia.org/dashboard/db/varnish-failed-fetches?panelId=7&fullscreen&orgId=1&from=1518746284946&to=1521204628041 [production]
12:12 <marostegui> Reboot dbproxy1005 for kernel upgrade [production]
12:02 <marostegui> Run pt-table-checksum on m2 [production]
12:00 <marostegui> Run pt-table-checksum on m5 [production]
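(pt-table-checksum detects replica drift by checksumming table chunks on the master and replaying the checksum queries through replication; a hedged invocation sketch, where the DSN and credentials are placeholders rather than the exact flags used:)

    pt-table-checksum --replicate=percona.checksums --no-check-binlog-format \
        h=m5-master.eqiad.wmnet,u=<user>,p=<password>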
11:11 <hashar> zuul: reenqueue all coverage jobs lost when restarting Zuul [production]
10:53 <hashar> Upgrading zuul to zuul_2.5.1-wmf4 to resolve a mutex deadlock T189859 [production]
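(The upgrade itself amounts to a pinned package install from the local apt repo followed by a service restart; a sketch, with the service name as an assumption:)

    sudo apt-get update
    sudo apt-get install zuul=2.5.1-wmf4
    sudo service zuul restart   # exact service name on the CI host is assumed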
10:45 <jynus> disable puppet and load balance between 3 wikireplicas on dbproxy1010 [production]
10:19 <jynus> upgrade and restart of dbproxy1009 (passive) [production]
10:01 <elukey> restart eventlogging_sync on db1108 (eventlogging db slave) as a precaution after the change of m4-master.eqiad.wmnet's CNAME [production]
10:00 <moritzm> reverting the HHVM/ICU 57 setup on mwdebug2001 which was used for the dry run tests [production]
09:57 <elukey> restart eventlogging-consumer@mysql-eventbus on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) [production]
09:56 <hashar> Zuul coverage pipeline is deadlocked on an unreleased mutex. Will need a new Zuul version. [production]
09:51 <elukey> restart eventlogging-consumer@mysql-m4 on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) [production]
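(These restarts are needed because the consumers resolve m4-master.eqiad.wmnet once at connect time and then hold the connection, so the CNAME flip from dbproxy1009 to dbproxy1004 only takes effect on reconnect; a sketch:)

    dig +short m4-master.eqiad.wmnet CNAME                        # confirm it now points at dbproxy1004
    sudo systemctl restart eventlogging-consumer@mysql-m4         # unit names taken from the log entries above
    sudo systemctl restart eventlogging-consumer@mysql-eventbus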