2018-06-28
00:05 <twentyafterfour> taking apache offline momentarily on phab1001 [production]
2018-06-27
23:57 <twentyafterfour> phabricator deployment is coming up in just a couple of minutes. There will be downtime while I run database migrations. [production]
23:10 <thcipriani@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:442356|Turning on page creation log for most wikis]] T196400 (duration: 00m 58s) [production]
21:50 <elukey> piwik maintenance on bohrium completed [production]
21:42 <XioNoX> setting BFD of the Zayo eqiad-codfw link to standard of 300 [production]
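(Editorial note: the "300" above is read here as the BFD minimum interval in milliseconds, which is an assumption; with the common default detect multiplier of 3, that would put failure detection at roughly 900 ms, as the small sketch below works out.)

```python
# Rough arithmetic only: assumes "300" is the BFD minimum interval in
# milliseconds and that the detect multiplier is 3 (a common default);
# neither assumption is confirmed by the log entry above.
interval_ms = 300
detect_multiplier = 3  # assumption, not stated in the log
detection_time_ms = interval_ms * detect_multiplier
print(f"approximate BFD failure detection time: {detection_time_ms} ms")  # 900 ms
```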
20:02 <dduvall@deploy1001> Synchronized php: (no justification provided) (duration: 00m 57s) [production]
20:02 <SMalyshev> applied fix for T197447 to eqiad wdqs cluster, which involved restart of the services [production]
19:54 <dduvall@deploy1001> rebuilt and synchronized wikiversions files: Group1 rolled back to 1.32.0-wmf.8 [production]
19:52 <marxarelli> Rolling back group1 due to rise in error rate (T198350) [production]
19:22 <marxarelli> errors seem due to "INSERT INTO `revision_comment_temp`" statements and lock wait timeout [production]
19:18 <marxarelli> seeing rising "Wikimedia\Rdbms\DBQueryError from line 1443 of /srv/mediawiki/php-1.32.0-wmf.10/includes/libs/rdbms/database/Database.php: A database query error has occurred. Did you forget to run your application's database schema update..." errors [production]
19:16 <dduvall@deploy1001> Synchronized php: group1 wikis to 1.32.0-wmf.10 (duration: 00m 58s) [production]
19:15 <dduvall@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.32.0-wmf.10 [production]
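(Editorial note: the entries from 19:15 to 19:54 above record a deploy / observe / rollback cycle: group1 was moved to 1.32.0-wmf.10, DBQueryError rates rose, and group1 was rolled back to wmf.8. The sketch below is illustrative only; the function name, baseline, and threshold are made up, and the real monitoring and scap rollback tooling are not shown in this log.)

```python
# Illustrative only: a toy error-rate check of the kind that motivated the
# 19:52 rollback decision above. Names and thresholds are hypothetical.
def should_roll_back(errors_per_minute: list[int],
                     baseline: float = 5.0,
                     factor: float = 10.0) -> bool:
    """Return True if the most recent per-minute error count is far above baseline."""
    return bool(errors_per_minute) and errors_per_minute[-1] > baseline * factor

# A spike like the DBQueryError rise logged at 19:18:
print(should_roll_back([3, 4, 6, 180]))  # True -> roll group1 back to 1.32.0-wmf.8
```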
17:57 <XioNoX> updating NTP servers on network devices [production]
17:53 <mobrovac@deploy1001> Finished deploy [proton/deploy@cd6ed94]: Update proton to 491e966 - T186748 T197856 (duration: 00m 35s) [production]
17:52 <mobrovac@deploy1001> Started deploy [proton/deploy@cd6ed94]: Update proton to 491e966 - T186748 T197856 [production]
16:39 <marostegui> Deploy schema change on dbstore1002:s8 T191316 T192926 T89737 T195193 [production]
15:38 <thcipriani@deploy1001> Synchronized README: Scap 3.8.3-1 noop test sync-file (duration: 00m 56s) [production]
15:36 <marostegui> Stop replication on db2094:3318 to update triggers on archive table [production]
15:30 <godog> upload scap 3.8.3 - T198277 [production]
14:51 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1067 (duration: 00m 56s) [production]
13:57 <dcausse> EU SWAT done [production]
13:42 <dcausse@deploy1001> Finished scap: wmf-config Add cirrussearch settings for wikibase (1/3) (duration: 05m 41s) [production]
13:40 <moritzm> uploaded debmonitor 0.1.5 to apt.wikimedia.org [production]
13:36 <dcausse@deploy1001> Started scap: wmf-config Add cirrussearch settings for wikibase (1/3) [production]
13:24 <zfilipin@deploy1001> Synchronized wmf-config/CommonSettings.php: SWAT: [[gerrit:442146|Change FileImporter config data location (T198050)]] (duration: 00m 57s) [production]
13:07 <elukey> piwik upgraded to 3.2.1 on bohrium + started the db migration procedure (will probably take 2-3 hours) [production]
12:49 <vgutierrez> Upgrade librdkafka1 and restart varnishkafka-webrequest in cache::upload nodes - T182993 [production]
11:31 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1076 after alter table (duration: 00m 57s) [production]
11:26 <marostegui> Deploy schema change on s8 codfw master (db2045) with replication, this will generate lag on s8 codfw T191316 T192926 T89737 T195193 [production]
10:38 <gehel> removing maps-test2003 from cluster for reimage - T198290 [production]
10:36 <volans@deploy1001> Finished deploy [debmonitor/deploy@9536ebf]: CSP header hotfix (duration: 00m 22s) [production]
10:36 <volans@deploy1001> Started deploy [debmonitor/deploy@9536ebf]: CSP header hotfix [production]
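(Editorial note: the debmonitor "CSP header hotfix" deployed at 10:36 above is not shown in this log. As a purely hypothetical illustration of what a Content-Security-Policy response header looks like, here is a minimal sketch; the policy string and helper name are assumptions, not the actual fix.)

```python
# Hypothetical sketch: attach a Content-Security-Policy header to a set of
# response headers. The policy value is an example, not the one deployed
# in the debmonitor hotfix above.
def with_csp(headers: dict, policy: str = "default-src 'self'") -> dict:
    """Return a copy of the headers with a Content-Security-Policy added."""
    updated = dict(headers)
    updated["Content-Security-Policy"] = policy
    return updated

print(with_csp({"Content-Type": "text/html"}))
```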
10:24 <marostegui> Deploy schema change on db1076 T191316 T192926 T89737 T195193 [production]
10:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1076 for alter table (duration: 00m 56s) [production]
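(Editorial note: the depool / schema-change / repool sequence around db1076 at 10:24-11:31 above recurs throughout this log for db1074, db1090:3312, and db1067. The outline below is a minimal sketch of that cycle under assumed helper names; in reality the depool and repool steps are edits to wmf-config/db-eqiad.php synced with scap, and none of the function names here are the actual tooling.)

```python
# Minimal sketch of the recurring depool -> ALTER TABLE -> repool cycle.
# All helpers are hypothetical stand-ins for the real workflow.
def depool(host: str) -> None:
    print(f"sync db-eqiad.php with {host} removed from rotation")

def run_schema_change(host: str, ddl: str) -> None:
    print(f"on {host}: {ddl}")  # e.g. the change tracked in T191316 et al.

def repool(host: str) -> None:
    print(f"sync db-eqiad.php with {host} back in rotation")

def online_schema_change(host: str, ddl: str) -> None:
    depool(host)
    run_schema_change(host, ddl)
    repool(host)

online_schema_change("db1076", "ALTER TABLE ...")  # placeholder DDL, not the real statement
```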
10:16 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1074 after alter table (duration: 00m 57s) [production]
10:11 <jynus> stopping db1067 and reimage it [production]
09:38 <volans@deploy1001> Finished deploy [debmonitor/deploy@052a9ea]: Release v0.1.5 (duration: 00m 24s) [production]
09:38 <volans@deploy1001> Started deploy [debmonitor/deploy@052a9ea]: Release v0.1.5 [production]
09:14 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1067 (duration: 00m 56s) [production]
08:35 <marostegui> Deploy schema change on db1074 with replication, this will generate lag on s2 on labsdb T191316 T192926 T89737 T195193 [production]
08:32 <marostegui> Stop replication on db1074 to remove triggers from db1125 - T192926 [production]
08:32 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1074 for alter table (duration: 00m 57s) [production]
08:26 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1090:3312 after alter table (duration: 00m 57s) [production]
07:58 <vgutierrez> Reinstall acamar & achernar as spare systems [production]
07:50 <vgutierrez@puppetmaster1001> conftool action : set/pooled=yes; selector: name=dns4001.wikimedia.org [production]
07:38 <vgutierrez@puppetmaster1001> conftool action : set/pooled=no; selector: name=dns4001.wikimedia.org [production]
07:37 <vgutierrez> Depool dns4001 for server restart - T198215 [production]
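(Editorial note: the two conftool entries at 07:38 and 07:50 above flip dns4001's pooled state off and back on around the server restart. The toy model below only reproduces that state flip; the commented confctl command syntax is a hedged reconstruction from the log text, not a verified invocation.)

```python
# Toy model of the pooled-state flip recorded above for dns4001. The real
# change goes through conftool/etcd; the commented command syntax below is
# an assumption based on the log text.
#
#   confctl select 'name=dns4001.wikimedia.org' set/pooled=no   # depool
#   confctl select 'name=dns4001.wikimedia.org' set/pooled=yes  # repool
pool_state = {"dns4001.wikimedia.org": "yes"}

def set_pooled(host: str, value: str) -> None:
    """Record the desired pooled state for a host (illustrative only)."""
    pool_state[host] = value
    print(f"conftool action : set/pooled={value}; selector: name={host}")

set_pooled("dns4001.wikimedia.org", "no")   # 07:38 depool for restart (T198215)
set_pooled("dns4001.wikimedia.org", "yes")  # 07:50 repool after restart
```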
05:08 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1090:3312 for alter table (duration: 01m 06s) [production]
05:08 <marostegui> Deploy schema change on db1090:3312 T191316 T192926 T89737 T195193 [production]