2020-10-06
09:38 <effie> disable puppet on mc* [production]
09:27 <klausman@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:26 <klausman@cumin1001> START - Cookbook sre.hosts.downtime [production]
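The sre.hosts.downtime entries above come from a spicerack cookbook run on cumin1001 (launched as sudo cookbook sre.hosts.downtime, with target and duration arguments not recorded in the log), and the puppet disable on the memcached hosts is the sort of step run fleet-wide through cumin. A minimal sketch of the latter, assuming the standard cumin and puppet CLIs; the host query and reason string are illustrative:

  # disable puppet agent runs on the memcached hosts, recording who and why
  sudo cumin 'mc*' 'puppet agent --disable "effie: maintenance"'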
09:08 <elukey> add an-worker1114 to the hadoop cluster [analytics]
09:04 <klausman> Starting reimaging of stat1007 [analytics]
08:57 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.init-hadoop-workers (exit_code=0) [production]
08:55 <elukey@cumin1001> START - Cookbook sre.hadoop.init-hadoop-workers [production]
08:33 <jayme> imported envoyproxy_1.15.1-1+deb9u1 to stretch-wikimedia [production]
08:27 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:26 <elukey@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:02 <volans> removing unused ms-fe and ms-fe-thumbs svc records from DNS (gerrit/628086) [production]
07:53 <marostegui> Change innodb_change_buffering = inserts on db2087:3316 db2089:3316 db2076 db2097:3316 db2114 T263443 [production]
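innodb_change_buffering is a dynamic MariaDB/MySQL global, so it can be changed per instance without a restart. A minimal sketch for one of the instances in the entry above, assuming direct mysql client access; the .codfw.wmnet hostname and credential handling are assumptions:

  # apply the setting to db2087:3316; repeat per instance listed above
  mysql -h db2087.codfw.wmnet -P 3316 -e "SET GLOBAL innodb_change_buffering = 'inserts';"
  # confirm the new value
  mysql -h db2087.codfw.wmnet -P 3316 -e "SELECT @@GLOBAL.innodb_change_buffering;"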
07:39 <filippo@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
07:35 <filippo@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
07:32 <elukey> bootstrap an-worker111[13] as hadoop workers [analytics]
07:31 <filippo@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'wikifeeds' for release 'staging' . [production]
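The three wikifeeds helmfile entries above (07:39, 07:35, 07:31) roll the same chart out per environment. A minimal sketch with the upstream helmfile CLI, assuming it is run from the wikifeeds service directory; the exact wrapper and paths used on deploy1001 are not shown in the log and are assumptions:

  helmfile -e staging sync    # 'staging' release in staging
  helmfile -e eqiad sync      # 'production' release in eqiad
  helmfile -e codfw sync      # 'production' release in codfw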
07:17 <marostegui> Remove es2015 and es2017 from tendril and zarcillo T264700 T264386 [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es2015 T264700 ', diff saved to https://phabricator.wikimedia.org/P12926 and previous config saved to /var/cache/conftool/dbconfig/20201006-071451-marostegui.json [production]
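The depool above is done with dbctl, the conftool database CLI: the instance is marked depooled in the pending configuration, then the change is committed, which produces the diff paste and backup path shown in the entry. A minimal sketch; the exact subcommand spelling is an assumption:

  # mark es2015 as depooled in the pending configuration
  dbctl instance es2015 depool
  # commit the pending change, attaching the task as the message
  dbctl config commit -m 'Depool es2015 T264700'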
07:05 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
06:59 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
06:01 <Matthew> Restarted bot completely, two instances refused to reconnect. [wm-bot]
05:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es2017 from dbctl T264386', diff saved to https://phabricator.wikimedia.org/P12925 and previous config saved to /var/cache/conftool/dbconfig/20201006-052849-marostegui.json [production]
2020-10-05
23:11 <ejegg> updated payments staging from 52704ffe24 to db03677b2d [production]
22:31 <Amir1> deleted deployment-mailman01 (T257118) [releng]
22:29 <Amir1> deleted deployment-imagescaler01 and deployment-imagescaler02 (T257118) [releng]
22:29 <mutante> deleted the shinken module [monitoring]
22:27 <mutante> removing shinken puppet module and role [production]
22:01 <ebernhardson> restore wikidatawiki_content enwiki_content enwiki_general and commonswiki_file to default index.merge.policy.deletes_pct_allowed on eqiad cirrus cluster T264053 [production]
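index.merge.policy.deletes_pct_allowed is a dynamic per-index Elasticsearch setting; putting null restores the built-in default. A minimal sketch for one of the indices named above, assuming direct access to the eqiad cluster's HTTP port; the host and port are illustrative:

  curl -XPUT 'http://localhost:9200/enwiki_content/_settings' \
    -H 'Content-Type: application/json' \
    -d '{"index.merge.policy.deletes_pct_allowed": null}'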
21:58 <bstorm> setting "mtail::from_component: true" on both mx-out servers to make puppet work again [cloudinfra]
21:01 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
21:00 <Urbanecm> Run keyholder arm at deployment-cumin [releng]
21:00 <Urbanecm> Run puppet at deployment-mwmaint01 and deployment-mediawiki-07 [releng]
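The two beta-cluster entries above are routine follow-ups after an instance change: keyholder arm re-arms the shared deployment SSH agent, and a manual puppet run applies pending changes. A minimal sketch, assuming shell access on the instances named above:

  # on deployment-cumin: unlock the keyholder agent (asks for the passphrase)
  sudo keyholder arm
  # on deployment-mwmaint01 and deployment-mediawiki-07: force a puppet agent run
  sudo puppet agent --test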
20:59 <andrew@cumin1001> START - Cookbook sre.hosts.downtime [production]
20:45 <hauskatze> Fixed puppet for deployment-mediawiki-07: s/memcache/memcached/ [releng]
20:30 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
20:28 <andrew@cumin1001> START - Cookbook sre.hosts.downtime [production]
20:26 <ebernhardson> restart elasticsearch_6@production-search-codfw on elastic2051 to take reduced (32 sector, 16kB) readahead settings T264053 [production]
20:13 <ebernhardson> restart elasticsearch_6@production-search-codfw on elastic2051 to take reduced (64 sector, 32kB) readahead settings T264053 [production]
19:56 <ebernhardson> restart elasticsearch_6@production-search-codfw on elastic2050 to take reduced (128kB) readahead settings T264053 [production]
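The three elastic2050/elastic2051 entries above step the readahead down; readahead is counted in 512-byte sectors, so 256 sectors = 128 kB, 64 sectors = 32 kB, and 32 sectors = 16 kB. A minimal sketch of one step, assuming the data volume is /dev/md1 (the device name is illustrative):

  blockdev --getra /dev/md1             # current readahead, in 512-byte sectors
  sudo blockdev --setra 32 /dev/md1     # 32 sectors = 16 kB
  sudo systemctl restart elasticsearch_6@production-search-codfw.service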
19:31 <mutante> ran sre.dns.netbox to push addition of an-worker1113 which was committed in prod repo but not in netbox data [production]
19:30 <dzahn@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
19:27 <dzahn@cumin1001> START - Cookbook sre.dns.netbox [production]
19:14 <mforns> restarted oozie coord unique_devices-per_domain-monthly after deployment [analytics]
19:05 <mforns> finished deploying refinery to unblock deletion of raw mediawiki_job and raw netflow data [analytics]
18:59 <mforns@deploy1001> Finished deploy [analytics/refinery@2c6c335] (thin): [THIN] Special deployment to unblock deletion jobs [analytics/refinery@2c6c335e61cecd0321ec6f066a153feaf2dbbc27] (duration: 00m 08s) [production]
18:59 <mforns@deploy1001> Started deploy [analytics/refinery@2c6c335] (thin): [THIN] Special deployment to unblock deletion jobs [analytics/refinery@2c6c335e61cecd0321ec6f066a153feaf2dbbc27] [production]
18:58 <mforns@deploy1001> Finished deploy [analytics/refinery@2c6c335]: Special deployment to unblock deletion jobs [analytics/refinery@2c6c335e61cecd0321ec6f066a153feaf2dbbc27] (duration: 12m 08s) [production]
18:46 <mforns@deploy1001> Started deploy [analytics/refinery@2c6c335]: Special deployment to unblock deletion jobs [analytics/refinery@2c6c335e61cecd0321ec6f066a153feaf2dbbc27] [production]
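The four analytics/refinery entries above are scap3 deployments run from deploy1001: a full deploy followed by a thin one (the thin variant uses a separate scap target set, not shown here). A minimal sketch, assuming the usual repository checkout path on the deploy host:

  cd /srv/deployment/analytics/refinery
  scap deploy 'Special deployment to unblock deletion jobs'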
18:46 <mutante> marked project for deletion in 2020 purge [planet]
18:45 <mforns> deploying refinery to unblock deletion of raw mediawiki_job and raw netflow data [analytics]