2018-06-12
23:00 <bblack> cp3043 - done, reimaged, in live service for cache_upload [production]
22:59 <bblack@neodymium> conftool action : set/pooled=yes; selector: name=cp3043.esams.wmnet [production]
22:37 <tzatziki> (from yesterday) resetting passwords for compromised accounts (T197046) [production]
22:24 <twentyafterfour> phabricator: I scheduled a 24 hour downtime in icinga for the phd service, to give me time to work on this issue. See T196840 [production]
22:23 <twentyafterfour> phabricator: taking phd offline to relieve the load on the m3 database cluster [production]
21:46 <bblack> cp3046 - restart varnish backend for mbox lag [production]
21:40 <bblack@neodymium> conftool action : set/pooled=no; selector: name=cp3043.esams.wmnet [production]
21:39 <bblack> cp3043 - starting the process to reimage into cache_upload [production]
19:16 <dduvall@deploy1001> rebuilt and synchronized wikiversions files: group0 to 1.32.0-wmf.8 [production]
18:57 <herron> restarted icinga service on einsteinium [production]
18:42 <dduvall@deploy1001> Finished scap: testwiki to php-1.32.0-wmf.8 and rebuild l10n cache (duration: 39m 39s) [production]
18:03 <dduvall@deploy1001> Started scap: testwiki to php-1.32.0-wmf.8 and rebuild l10n cache [production]
17:38 <ariel@deploy1001> Finished deploy [dumps/dumps@038c8b3]: sync after snapshot1009 install (duration: 00m 04s) [production]
17:37 <ariel@deploy1001> Started deploy [dumps/dumps@038c8b3]: sync after snapshot1009 install [production]
17:37 <ariel@deploy1001> Finished deploy [dumps/dumps@038c8b3]: sync after snapshot1009 install (duration: 00m 07s) [production]
17:37 <ariel@deploy1001> Started deploy [dumps/dumps@038c8b3]: sync after snapshot1009 install [production]
16:54 <marxarelli> starting branch cut for 1.32.0-wmf.8 [production]
16:11 <volans@deploy1001> Finished deploy [debmonitor/deploy@0eca14a]: Release v0.1.3 (duration: 00m 22s) [production]
16:11 <volans@deploy1001> Started deploy [debmonitor/deploy@0eca14a]: Release v0.1.3 [production]
15:40 <bblack> cp3034 - never mind, taking a different approach later in the day, still pooled in text for now! [production]
15:29 <bblack> cp3043 switching from text to upload shortly, downtimed in icinga for 2h - https://gerrit.wikimedia.org/r/c/operations/puppet/+/439936 [production]
15:07 <ema> cp3039: restart varnish-backend [production]
14:38 <addshore> FileExporter/Importer deployment slot done [production]
14:38 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: FileImporter/Exporter [[gerrit:439876|Enable FileExporter/Importer on group0 wikis]] T195370 (duration: 00m 51s) [production]
14:20 <addshore@deploy1001> Synchronized wmf-config/CommonSettings.php: FileImporter/Exporter [[gerrit:439875|Allow setting of export target for FileExporter]] T195370 (duration: 00m 50s) [production]
14:09 <addshore@deploy1001> Finished scap: [[gerrit:439900|FileExporter backport]] - Pre deployment backport (extension not yet deployed) (duration: 30m 37s) [production]
13:38 <addshore@deploy1001> Started scap: [[gerrit:439900|FileExporter backport]] - Pre deployment backport (extension not yet deployed) [production]
13:16 <moritzm> installing openjdk-8 security updates on restbase-dev along with cassandra restarts [production]
12:38 <ema> cp3035: restart varnish-be, mbox lag [production]
12:34 <_joe_> repooling mw1230 after reimaging T196881 [production]
12:14 <marostegui> Deploy schema change on db1099:3311 T191316 T192926 T89737 T195193 [production]
12:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1099:3311 for alter table (duration: 00m 52s) [production]
12:11 <marostegui> Deploy schema change on dbstore1002:s1 T191316 T192926 T89737 T195193 [production]
12:05 <akosiaris@puppetmaster1001> conftool action : set/weight=10; selector: dc=.*,service=mathoid,cluster=kubernetes,name=.* [production]
11:47 <moritzm> updated component/cassandra311 on apt.wikimedia.org to 3.11.2 [production]
10:26 <jynus> setting expire_logs_days on db1066 to 30 [production]
10:21 <godog> bounce stuck rsyslog on lithium / wezen - T136312 [production]
09:41 <vgutierrez> cp3037 has been depooled due to unknown hardware issues T196974 [production]
08:48 <marostegui> Stop replication on db2094 to change triggers for archive table [production]
08:36 <volans> running puppet on failed hosts post small puppet outage and puppetdb reboot [production]
08:35 <akosiaris> rebalance ganeti codfw cluster [production]
08:35 <ema@neodymium> conftool action : set/pooled=no; selector: name=cp3037.esams.wmnet [production]
08:33 <akosiaris> reboot puppetdb1001 to enable spec-ctrl. Bundling it with a minor puppet outage so the torrent of harmless puppet failures happens only once [production]
08:15 <akosiaris> ganeti2002 reboot for microcode update [production]
08:04 <akosiaris> ganeti2006 reboot for microcode update [production]
08:03 <marostegui> Deploy schema change on s1 codfw primary master (db2048) with replication; this will generate lag on codfw T191316 T192926 T89737 T195193 [production]
07:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1121 after alter table (duration: 00m 50s) [production]
07:43 <akosiaris> ganeti2007 reboot for microcode update [production]
07:41 <akosiaris> ganeti2003 reboot for microcode update [production]
07:31 <mutante> closing idle screen session on tin (about to be decommissioned, don't use anymore) [production]