2019-03-04
11:04 <akosiaris@deploy1001> scap-helm citoid finished [production]
11:04 <akosiaris@deploy1001> scap-helm citoid upgrade -f citoid-eqiad-values.yaml production stable/citoid [namespace: citoid, clusters: eqiad] [production]
11:04 <akosiaris@deploy1001> scap-helm citoid finished [production]
11:04 <akosiaris@deploy1001> scap-helm citoid cluster staging completed [production]
11:04 <akosiaris@deploy1001> scap-helm citoid upgrade -f citoid-staging-values.yaml staging stable/citoid [namespace: citoid, clusters: staging] [production]
10:53 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More weight to db1089 (duration: 00m 48s) [production]
10:38 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:494191| Bumping portals to master (T128546)]] (duration: 00m 50s) [production]
10:37 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:494191| Bumping portals to master (T128546)]] (duration: 00m 50s) [production]
09:44 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1089 with low weight (duration: 00m 48s) [production]
09:27 <ariel@deploy1001> Finished deploy [dumps/dumps@932bf7e]: make misc dumps failure message nicer (duration: 00m 09s) [production]
09:27 <ariel@deploy1001> Started deploy [dumps/dumps@932bf7e]: make misc dumps failure message nicer [production]
09:22 <godog> temporarily stop prometheus on prometheus2004 to take a snapshot [production]
08:45 <gilles@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T216499 Undo enabling Priority Hints origin trial on ruwiki (duration: 00m 49s) [production]
08:44 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1103:3314 (duration: 00m 49s) [production]
08:38 <gilles@deploy1001> scap failed: average error rate on 7/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/db09a36be5ed3e81155041f7d46ad040 for details) [production]
08:29 <marostegui> Change logging indexes on db1089 to leave the indexes exactly like the ones on tables.sql - T217397 [production]
08:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1089 - T217397 (duration: 00m 49s) [production]
07:48 <ema> cp3032/cp3042: restart varnish-be due to mbox lag [production]
07:42 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1103:3314 for schema change (duration: 00m 49s) [production]
07:39 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1097:3314 (duration: 00m 53s) [production]
07:33 <marostegui> Reload haproxy on dbproxy1010 to repool labsdb1010 [production]
07:17 <kart_> Finished manual run of unpublished ContentTranslation draft purge script (T217310) [production]
07:13 <marostegui> Remove dbstore1002 from tendril and zarcillo - T216491 [production]
07:05 <marostegui> Upgrade MySQL on db2088 and db2091 [production]
06:46 <marostegui> Stop MySQL on dbstore1002 for decommission T210478 T172410 T216491 T215589 [production]
06:38 <marostegui> Stop MySQL on labsdb1010 for mysql upgrade [production]
06:34 <gtirloni> downtimed cloudstore1008/9 (T209527) [production]
06:13 <marostegui> Upgrade MySQL on db2041 db2049 db2056 db2095 [production]
06:06 <marostegui> Run analyze table logging on db2038 and db2059 - T71222 [production]
06:05 <marostegui> Reload haproxy on dbproxy1010 to depool labsdb1010 [production]
06:04 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1094:3314 for schema change (duration: 01m 11s) [production]
05:18 <kart_> Started manual run of unpublished ContentTranslation draft purge script (T217310) [production]
2019-03-03
12:26 <volans|off> restarted icinga on icinga2001, stale status file, too many open files [production]
10:44 <elukey> restart pdfrender on scb1003 [production]
2019-03-02
12:12 <gtirloni> labstore1006 started nfsd T217473 [production]
2019-03-01
20:45 <ejegg> turned off fundraising omnimail process unsubscribes job [production]
19:40 <XioNoX> pre-configure asw-a8 ports on asw2-a8-eqiad - T187960 [production]
19:32 <XioNoX> pre-configure asw-a7 ports on asw2-a7-eqiad - T187960 [production]
19:29 <XioNoX> pre-configure asw-a6 ports on asw2-a6-eqiad - T187960 [production]
19:17 <XioNoX> pre-configure asw-a5 ports on asw2-a5-eqiad - T187960 [production]
18:53 <robh> notebook1003 has unusually high load recently (23) and seemed to lag in reporting to icinga. no hardware failures, pinged about it in #wikimedia-analytics [production]
16:33 <jbond42> rolling security update of bind9 packages on jessie and trusty [production]
15:38 <ema> trafficserver_8.0.2-1wm1 uploaded to stretch-wikimedia [production]
15:02 <akosiaris> restore proton config values [production]
14:33 <hashar> Updating all debian-glue Jenkins job to properly take in account the BUILD_TIMEOUT parameter # T217403 [production]
13:24 <moritzm> removed sca* hosts from debmonitor database [production]
12:49 <akosiaris> lower max_render_queue_size: to 20 for proton on proton100{1,2} [production]
12:32 <akosiaris> restart proton1002, OOM showed up [production]
12:31 <akosiaris> restart proton on proton1001, counted 99 chromium processes left running since at least Jan 30 [production]
11:47 <jbond42> rebooting labsdb1005.codfw.wmnet [production]