2018-03-17 §
00:13 <mutante> running puppet on all cache::misc to rename director bromine to webserver_misc_static (T188163) [production]
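Fleet-wide puppet runs like this are typically driven with cumin from a cluster management host; a minimal sketch under that assumption (the selector and the run-puppet-agent wrapper are illustrative, not copied from the session):
    # run the puppet agent on every host carrying the cache::misc role
    sudo cumin 'R:class = role::cache::misc' 'run-puppet-agent'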
2018-03-16 §
23:32 <mutante> signing puppet cert for vega.codfw.wmnet, initial puppet run after fresh stretch install (T188163) [production]
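The signing step is the standard puppet CA flow; a sketch, assuming the classic `puppet cert` CLI available on 2018-era puppetmasters:
    # on the puppet CA: confirm the pending CSR and sign it
    sudo puppet cert list
    sudo puppet cert sign vega.codfw.wmnet
    # on vega itself: kick the first agent run against the freshly signed cert
    sudo puppet agent --test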
22:44 <zhuyifei1999_> suspended process 22825 (BotOrderOfChapters.exe) on tools-bastion-03. Threads continuously going to D-state & R-state. Also sent message via $ write on pts/10 [tools]
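Roughly the sequence such an intervention follows (PID and terminal come from the entry; the thread inspection and the username placeholder are illustrative):
    # show per-thread states (D = uninterruptible I/O wait, R = running)
    ps -L -o pid,tid,stat,comm -p 22825
    # pause the process without killing it
    kill -STOP 22825
    # tell the owner on their login terminal why it was suspended
    echo "suspended BotOrderOfChapters.exe (pid 22825): threads stuck in D/R state" | write <username> pts/10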
20:23 <ottomata> bouncing main -> jumbo mirror makers to apply change-prop topic blacklist [analytics]
18:43 <mutante> creating new ganeti VM vega.codfw.wmnet to be equivalent of bromine, 1G RAM, 30G disk, 1vCPU (T189899) [production]
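A hedged sketch of the matching gnt-instance call, carrying only the sizing from the entry; the real invocation on the WMF Ganeti cluster includes more flags (network, node group, OS variant):
    # create a 1 vCPU / 1G RAM / 30G disk VM on the codfw Ganeti cluster
    sudo gnt-instance add -t drbd -I hail \
        -B vcpus=1,memory=1G --disk 0:size=30G \
        -o debootstrap+default --no-install \
        vega.codfw.wmnet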
18:13 <jynus> switching back wikireplica cloud dns to the original config [production]
18:08 <paladox> upgrading gerrit on gerrit-new.wmflabs.org/r/ to latest master version (includes UI tweaks + lots and lots of bug fixes) [git]
17:32 <jynus> reimage dbproxy1010 [production]
16:29 <jynus> updating wikireplica_dns 2/3 [production]
16:22 <moritzm> installing curl security updates [production]
16:09 <marostegui> Stop MySQL on db1020 - T189773 [production]
14:48 <andrewbogott> reset contintcloud quotas as per https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS/Admin/Troubleshooting#incorrect_quota_violations [production]
14:48 <jynus> reimage dbproxy1011 [production]
14:44 <ottomata> restarting eventlogging mysql eventbus consumer to consume from analytics instead of jumbo [analytics]
14:38 <elukey> temporarily point pivot to druid1002 as prep step for druid1001's reboot [analytics]
14:37 <elukey> disable druid1001's middlemanager as prep step for reboot [analytics]
14:27 <andrewbogott> restarting nodepool on nodepool1001 [production]
14:25 <elukey> reboot druid1002 for kernel updates [production]
14:24 <elukey> changed superset druid private config from druid1002 to druid1003 [analytics]
14:14 <andrewbogott> restarting rabbitmq on labcontrol1001 [production]
14:13 <Amir1> deleted redis-dispatching-client and redis-dispacthing-repo [wikidata-dev]
13:57 <andrewbogott> stopping nodepool temporarily during changes to nova.conf [production]
13:43 <elukey> disable druid1002's middle manager via API as prep step for reboot [analytics]
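"Disable via API" for a Druid middleManager usually means the worker endpoints shown below; a sketch assuming the default middleManager port 8091 and the host's eqiad.wmnet FQDN:
    # stop accepting new indexing tasks and let running ones drain before the reboot
    curl -X POST http://druid1002.eqiad.wmnet:8091/druid/worker/v1/disable
    # verify nothing is still running
    curl http://druid1002.eqiad.wmnet:8091/druid/worker/v1/tasks
    # re-enable once the host is back up
    curl -X POST http://druid1002.eqiad.wmnet:8091/druid/worker/v1/enable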
13:41 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2050 (duration: 00m 58s) [production]
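Entries of this form are emitted automatically by scap; the underlying command is roughly the following (a sketch, with the log message abbreviated):
    # on the deployment host: sync a single config file to the fleet and log the reason
    scap sync-file wmf-config/db-codfw.php 'Repool db2050'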
13:15 <chasemp> disable puppet across cloud things for safe rollout [production]
12:56 <hashar> deployment-tin: ran git remote prune origin / git gc on all /srv/deployment git repositories [releng]
12:56 <hashar> deployment-tin: setting /srv/deployment files/dir to be group owned by wikidev and group writable [releng]
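Roughly the shape of those two clean-up passes over /srv/deployment (a sketch; the directory glob is an assumption about the repo layout):
    # make the deployment trees group-owned by wikidev and group-writable
    sudo chgrp -R wikidev /srv/deployment
    sudo chmod -R g+w /srv/deployment
    # prune stale remote-tracking branches and repack each git repository
    for repo in /srv/deployment/*/*/.git; do
        git -C "${repo%/.git}" remote prune origin
        git -C "${repo%/.git}" gc
    done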
12:52 <moritzm> uploaded libsodium23/php-acpu/php-mailparse to thirdparty/php72 (deps/extensions needed by Phabricator) [production]
12:51 <ema> text-esams: reboot for kernel upgrades T188092 and to mitigate https://grafana.wikimedia.org/dashboard/db/varnish-failed-fetches?panelId=7&fullscreen&orgId=1&from=1518746284946&to=1521204628041 [production]
12:50 <hashar> deployment-tin: deleting /srv/grafana (no longer in Gerrit) [releng]
12:46 <hashar> deployment-tin: sudo chown mwdeploy:mwdeploy /srv/mediawiki/.git/objects/pack/* # some pack of 6GB belonged to root [releng]
12:13 <arturo> reboot tools-webgrid-lighttpd-1420 due to almost full /tmp [tools]
12:12 <marostegui> Reboot dbproxy1005 for kernel upgrade [production]
12:02 <marostegui> Run pt-table-checksum on m2 [production]
12:00 <marostegui> Run pt-table-checksum on m5 [production]
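pt-table-checksum (Percona Toolkit) verifies replica consistency against a master section such as m2 or m5; a minimal sketch of such a run (host, credentials and options here are illustrative):
    # checksum every table on the m2 master; replicas compare the results via replication
    pt-table-checksum \
        --replicate=percona.checksums \
        --no-check-binlog-format \
        h=m2-master.eqiad.wmnet,u=checksum_user,p=REDACTED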
11:11 <hashar> zuul: reenqueue all coverage jobs lost when restarting Zuul [releng]
11:11 <hashar> zuul: reenqueue all coverage jobs lost when restarting Zuul [production]
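Re-enqueueing lost changes is done with the zuul client, one change at a time; a sketch for a single change (project and change numbers are placeholders):
    # re-add a change to the coverage pipeline after the scheduler restart
    zuul enqueue --trigger gerrit --pipeline coverage \
        --project mediawiki/core --change 123456,1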
10:53 <hashar> Upgrading zuul to zuul_2.5.1-wmf4 to resolve a mutex deadlock T189859 [releng]
10:53 <hashar> Upgrading zuul to zuul_2.5.1-wmf4 to resolve a mutex deadlock T189859 [production]
10:45 <jynus> disable puppet and load balance between 3 wikireplicas on dbproxy1010 [production]
10:19 <jynus> upgrade and restart of dbproxy1009 (passive) [production]
10:01 <elukey> restart eventlogging_sync on db1108 (eventlogging db slave) as a precaution after the change of m4-master.eqiad.wmnet's CNAME [production]
10:00 <moritzm> reverting the HHVM/ICU 57 setup on mwdebug2001 which was used for the dry run tests [production]
09:57 <elukey> restart eventlogging-consumer@mysql-eventbus on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) [production]
09:57 <elukey> restart eventlogging-consumer@mysql-m4/eventbus on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) [analytics]
09:56 <hashar> Zuul coverage pipeline is deadlocked on an unreleased mutex. Will need a new Zuul version. [production]
09:51 <elukey> restart eventlogging-consumer@mysql-m4 on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) [production]
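Long-running consumers keep their resolved m4-master address, so the CNAME flip only takes effect on reconnect; a sketch of the check-and-restart (the dig verification is illustrative, the unit name is taken from the entry):
    # confirm m4-master now resolves to the new proxy (dbproxy1004)
    dig +short m4-master.eqiad.wmnet
    # restart the consumer so it reconnects to the new target
    sudo systemctl restart eventlogging-consumer@mysql-m4
    sudo journalctl -u eventlogging-consumer@mysql-m4 -n 20 --no-pager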
09:31 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Restore original weight for es1015 after kernel, mariadb and socket upgrade (duration: 00m 57s) [production]
09:27 <oblivian@tin> Finished deploy [netbox/deploy@ccc342a]: Re-deploying with the newly built artifacts/2 (duration: 00m 29s) [production]
09:26 <oblivian@tin> Started deploy [netbox/deploy@ccc342a]: Re-deploying with the newly built artifacts/2 [production]