2020-03-02
11:39 <jbond42> enable strict_hostname_checking on the puppet masters https://gerrit.wikimedia.org/r/c/operations/puppet/+/575220 [production]
11:12 <kart_> Update cxserver to 2020-02-28-043702-production (T246319) [production]
11:07 <kartik@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
11:05 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
11:04 <kartik@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
11:03 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
11:02 <kartik@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'cxserver' for release 'staging' . [production]
11:01 <addshore> START warm cache for db1111 & db1126 for Q8-10 million T219123 (pass 1) [production]
10:55 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q8M for the new term store for clients (was Q6M) + warm db1126 & db1111 caches (T219123) (duration: 00m 58s) [production]
10:35 <vgutierrez> reimage lvs5002 with buster - T245984 [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 150 to 200 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10576 and previous config saved to /var/cache/conftool/dbconfig/20200302-103445-marostegui.json [production]
10:22 <addshore> START warm cache for db1111 & db1126 for Q6-8 million T219123 (pass 2) [production]
09:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 100 to 150 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10575 and previous config saved to /var/cache/conftool/dbconfig/20200302-095921-marostegui.json [production]
09:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10574 and previous config saved to /var/cache/conftool/dbconfig/20200302-095841-marostegui.json [production]
09:52 <moritzm> installing remaining curl security updates [production]
09:51 <addshore> START warm cache for db1111 & db1126 for Q6-8 million T219123 (pass 1) [production]
09:50 <elukey> powercycle an-worker1083 (no ssh, mgmt console available but tty not really usable, CPU soft lockups reported) [production]
09:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10573 and previous config saved to /var/cache/conftool/dbconfig/20200302-094633-marostegui.json [production]
09:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10572 and previous config saved to /var/cache/conftool/dbconfig/20200302-093848-marostegui.json [production]
09:38 <moritzm> installing openssh updates for jessie [production]
09:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 80 to 100 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10571 and previous config saved to /var/cache/conftool/dbconfig/20200302-093449-marostegui.json [production]
09:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10570 and previous config saved to /var/cache/conftool/dbconfig/20200302-092743-marostegui.json [production]
09:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1119 T239791', diff saved to https://phabricator.wikimedia.org/P10569 and previous config saved to /var/cache/conftool/dbconfig/20200302-091947-marostegui.json [production]
09:12 <addshore> warm cache for db1111 for Q0-6 million T219123 T246447 (pass 2) [production]
08:54 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 50 to 80 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10568 and previous config saved to /var/cache/conftool/dbconfig/20200302-085420-marostegui.json [production]
08:44 <moritzm> installing openssh updates for stretch [production]
08:33 <addshore> warm cache for db1111 for Q0-6 million T219123 T246447 [production]
08:14 <addshore> resume item term table rebuild script (from Q54 mill) T219123 [production]
08:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 30 to 50 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10567 and previous config saved to /var/cache/conftool/dbconfig/20200302-080721-marostegui.json [production]
07:22 <vgutierrez> upgrading NICs FW on lvs2008 - T196560 T203194 [production]
07:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 10 to 30 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10566 and previous config saved to /var/cache/conftool/dbconfig/20200302-072118-marostegui.json [production]
07:10 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:08 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 1 to 10 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10565 and previous config saved to /var/cache/conftool/dbconfig/20200302-064522-marostegui.json [production]
06:42 <marostegui> Enable events on db1111 T246447 [production]
06:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1111 to s8 with minimal weight to check grants and any other issues T246447', diff saved to https://phabricator.wikimedia.org/P10564 and previous config saved to /var/cache/conftool/dbconfig/20200302-062435-marostegui.json [production]
06:04 <marostegui> Re-add db1111 to s8 in tendril and zarcillo - T246447 [production]
2020-03-01
17:54 <marostegui> Start replication on db1111 new host on s8 - T246447 [production]
17:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Reduce main traffic weight for db1087 as dumps are running ', diff saved to https://phabricator.wikimedia.org/P10563 and previous config saved to /var/cache/conftool/dbconfig/20200301-174536-marostegui.json [production]
16:08 <reedy@deploy1001> scap failed: average error rate on 5/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/db09a36be5ed3e81155041f7d46ad040 for details) [production]
06:02 <ariel@deploy1001> Finished deploy [dumps/dumps@8376c62]: refactor page content jobs, prefetch, and output file listings: see T246465 (duration: 00m 04s) [production]
06:02 <ariel@deploy1001> Started deploy [dumps/dumps@8376c62]: refactor page content jobs, prefetch, and output file listings: see T246465 [production]
2020-02-29
12:37 <reedy@deploy1001> Synchronized wmf-config/config/viwiki.yaml: T246511 (duration: 00m 56s) [production]
12:35 <reedy@deploy1001> Synchronized wikiversions-labs.json: T246511 (duration: 00m 56s) [production]
12:34 <reedy@deploy1001> Synchronized dblists/all-labs.dblist: T246511 (duration: 00m 57s) [production]
2020-02-28
21:31 <mutante> using planet1001 to manually hack APT sources to test new apt1001.wikimedia.org [production]
20:29 <pt1979@cumin2001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
20:26 <pt1979@cumin2001> START - Cookbook sre.hosts.downtime [production]
19:01 <milimetric@deploy1001> Finished deploy [analytics/refinery@0fc392f] (thin): Hotfix: going back to a safe version of geo udf (duration: 00m 07s) [production]
19:01 <milimetric@deploy1001> Started deploy [analytics/refinery@0fc392f] (thin): Hotfix: going back to a safe version of geo udf [production]