2017-12-19
10:53 <elukey> reboot conf2001 for kernel updates - T179943 [production]
10:52 <moritzm> upgrading pdns-recursor on nescio to 4.0.4+deb9u3~bpo8+1 (security fix) [production]
10:47 <elukey> restart zookeeper on conf2001 for jvm updates - T179943 [production]
10:45 <jynus> disabling puppet on dbproxies for 398450 deploy [production]
10:38 <godog> rollout updated version of prometheus-nutcracker-exporter [production]
09:12 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1097:3315 - T161294 (duration: 00m 51s) [production]
08:56 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1106 - T161294 (duration: 00m 51s) [production]
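(Database depool/repool entries like the two above are produced by editing the MediaWiki config on the deployment host and syncing it out; a hedged sketch of the usual sequence follows — the file path comes from the log, while the staging path and scap subcommand are assumed rather than quoted from it:)
    # sketch only: on the deployment host, remove the replica from its section's load list, then sync the single file
    $EDITOR /srv/mediawiki-staging/wmf-config/db-eqiad.php
    scap sync-file wmf-config/db-eqiad.php 'Depool db1106 - T161294'    # the sync message mirrors the "Synchronized ..." log line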
08:35 <moritzm> reimaging mw1317 (video scaler) to stretch [production]
08:28 <marostegui> Stop replication in sync on db2045 and db1109 - T161294 [production]
08:21 <moritzm> installing openssl security updates [production]
08:05 <jmm@puppetmaster1001> conftool action : set/pooled=yes; selector: mw2246.codfw.wmnet [production]
08:05 <jmm@puppetmaster1001> conftool action : set/pooled=yes; selector: mw2119.codfw.wmnet [production]
07:00 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1106 - T161294 (duration: 00m 51s) [production]
06:53 <mobrovac@tin> Finished deploy [restbase/deploy@2b75a64]: Bug fix: Add the time_to_live config option to the Parsoid module (duration: 04m 26s) [production]
06:51 <marostegui> Stop replication in sync on db1106 and db2052 - T161294 [production]
06:49 <mobrovac@tin> Started deploy [restbase/deploy@2b75a64]: Bug fix: Add the time_to_live config option to the Parsoid module [production]
06:40 <marostegui> Stop replication in sync on db1106 and dbstore1002 s5 - T161294 [production]
06:29 <marostegui> Stop replication in sync on db1100 and db1106 - T161294 [production]
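(The "stop replication in sync" entries above and below refer to halting two replicas at the same binlog coordinates so their data can be compared; a rough sketch of one way to do it — hostnames from the log, coordinates are placeholders, and the actual WMF tooling is not shown:)
    # sketch only: stop db1100, read its executed coordinates, then let db1106 replicate up to the same point and stop
    mysql -h db1100.eqiad.wmnet -e "STOP SLAVE; SHOW SLAVE STATUS\G"    # note Relay_Master_Log_File / Exec_Master_Log_Pos
    mysql -h db1106.eqiad.wmnet -e "STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE='<file>', MASTER_LOG_POS=<pos>"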
06:26 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1106 - T161294 (duration: 00m 53s) [production]
06:09 <marostegui> Deploy schema change on db1065 (s1 sanitarium master) with replication, so some lag will be generated on labs - T174569 [production]
05:18 <andrewbogott> restarting slapd on seaborgium (in response to ldap complaints on the grid master) [production]
02:24 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.12) (duration: 05m 22s) [production]
00:44 <mutante> einsteinium: sudo systemctl restart ircecho (alias kick-icinga-wm) [production]
2017-12-18
22:34 <ejegg> updated payments-wiki from f594dfa763 to e91db27108 [production]
21:00 <mutante> uranium - apt-get remove ganglia-webfrontend, apache2 [production]
20:53 <mutante> ganglia.wikimedia.org shut down just now after a deprecation period - service is out of commission - T177225 [production]
20:53 <chasemp> reboot labtestvirt2003 [production]
20:49 <mutante> install1002/2002 - killing all ganglia processes, decoming aggregators [production]
20:48 <bawolff@tin> Synchronized php-1.31.0-wmf.12/extensions/TemplateData/TemplateDataBlob.php: T118682 (duration: 00m 52s) [production]
19:49 <robh@puppetmaster1001> conftool action : set/pooled=no; selector: name=cp4032.ulsfo.wmnet [production]
18:42 <moritzm> installing xml2 updates from stretch point release [production]
18:28 <moritzm> installing libxkbcommon updates from stretch point release [production]
18:11 <moritzm> installing python updates from stretch point release [production]
17:51 <elukey> run kafka preferred-replica-election on the analytics cluster to allow kafka1023 (new node) to become a partition leader [production]
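(A preferred-replica election, as in the entry above, asks Kafka to move partition leadership back to each partition's preferred replica, which lets the newly added kafka1023 pick up leaders; a sketch using the stock Kafka tool — the ZooKeeper connection string is a placeholder and the actual WMF wrapper may differ:)
    # sketch only -- trigger a preferred replica election for all partitions in the cluster
    kafka-preferred-replica-election.sh --zookeeper <zookeeper-host>:2181/<kafka-chroot>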
16:23 <demon@tin> Pruned MediaWiki: 1.31.0-wmf.11 [keeping static files] (duration: 01m 18s) [production]
16:17 <demon@tin> Pruned MediaWiki: 1.31.0-wmf.8 (duration: 04m 58s) [production]
16:15 <elukey@puppetmaster1001> conftool action : set/pooled=inactive; selector: name=mw133[0-7].eqiad.wmnet [production]
16:09 <thcipriani@tin> Synchronized README: noop sync to test scap 3.7.4-3 (duration: 03m 02s) [production]
16:01 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1099:3311 - T174569 (duration: 03m 03s) [production]
15:47 <marostegui> Stop MySQL on db1111 to copy its content to db1112 - T180788 [production]
15:45 <jynus> stop and upgrade db1107 T183123 [production]
15:37 <marostegui> Stop db1100 and dbstore1002 in sync - T161294 [production]
15:28 <elukey@puppetmaster1001> conftool action : set/pooled=no; selector: name=mw1329.eqiad.wmnet [production]
15:23 <moritzm> uploaded prometheus-blazegraph-exporter, prometheus-wdqs-updater-exporter and prometheus-pdns-exporter to apt.wikimedia.org [production]
15:16 <chasemp> reboot labtestvirt2003 [production]
14:41 <ema> upgrade pinkunicorn to latest jessie point release (8.10) T182656 [production]
14:13 <elukey> temporarily stopped mysql consumers on eventlog1001 to ease a mysql backup on db1107 - T183123 [production]
13:58 <jynus> starting one-time backup of eventlogging database on db1107:/srv/backups T183123 [production]
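(A one-time logical backup like the one started above can be as simple as a mysqldump into the target directory; a hedged sketch — the database name and exact options are assumptions, and the actual tooling used on db1107 may differ:)
    # sketch only -- dump the eventlogging database to /srv/backups without locking the server
    mysqldump --single-transaction log | gzip > /srv/backups/eventlogging_$(date +%F).sql.gz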
13:29 <marostegui> Stop replication in sync on db1109 and db2045 - T161294 [production]
13:25 <jmm@puppetmaster1001> conftool action : set/pooled=yes; selector: mw1307.eqiad.wmnet [production]