2019-12-11
13:37 <awight> EU SWAT complete [production]
13:25 <andrew-wmde@deploy1001> Synchronized php-1.35.0-wmf.8/extensions/Cite: SWAT: [[gerrit:556367|Revert "Lazily fetch user interface language to prevent cache split" ()]] (duration: 01m 02s) [production]
12:54 <andrew-wmde@deploy1001> Synchronized php-1.35.0-wmf.10/extensions/Cite: SWAT: [[gerrit:556351|Use messagelocalizer in CiteErrorReporter (T239988)]] (duration: 01m 04s) [production]
12:38 <andrew-wmde@deploy1001> scap failed: average error rate on 3/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/db09a36be5ed3e81155041f7d46ad040 for details) [production]
12:09 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 7651c1a: GrowthExperiments: Configure testwiki to use local search & config (T235717) (duration: 01m 02s) [production]
12:03 <ladsgroup@deploy1001> Synchronized php-1.35.0-wmf.10/extensions/Wikibase/data-access: [[gerrit:556353|Fix idlookup dropping pageids (T236691 T240410)]] (duration: 01m 03s) [production]
12:00 <moritzm> installing git security updates [production]
11:57 <jbond42> draining kubernetes2003 to restart calico-node [production]
11:55 <jbond42> draining kubernetes2002 to restart calico-node [production]
11:52 <jbond42> draining kubernetes2001 to restart calico-node [production]
11:36 <jbond42> draining kubernetes1004.eqiad.wmnet to restart calico-node [production]
11:31 <jbond42> draining kubernetes1005.eqiad.wmnet to restart calico-node [production]
11:27 <jbond42> draining kubernetes1006.eqiad.wmnet to restart calico-node [production]
10:51 <jbond42> draining kubernetes1003.eqiad.wmnet to restart calico-node [production]
10:48 <jbond42> draining kubernetes1002.eqiad.wmnet to restart calico-node [production]
10:45 <marostegui> Deploy schema change on db1103:3314 [production]
10:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1103:3314 for schema change T233135', diff saved to https://phabricator.wikimedia.org/P9851 and previous config saved to /var/cache/conftool/dbconfig/20191211-104506-marostegui.json [production]
10:39 <jbond42> draining kubernetes1001.eqiad.wmnet to restart calico-node [production]
10:34 <Nikerabbit> Finished running Translate/refresh-translatable-pages.php --jobqueue for Translate wikis - T235027 T235188 [production]
10:03 <ema> cp-ats: apply set_server_resp_no_store patch https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/556201/ to all hosts T227432 [production]
09:45 <ema@cumin1001> conftool action : set/pooled=yes; selector: name=cp1075.eqiad.wmnet,service=ats-be [production]
09:44 <ema> cp1075: repool ats-be after successful set_server_resp_no_store test P9849 T227432 [production]
09:33 <godog> roll-restart logstash in codfw/eqiad after https://gerrit.wikimedia.org/r/c/operations/puppet/+/556173 [production]
09:25 <ema@cumin1001> conftool action : set/pooled=no; selector: name=cp1075.eqiad.wmnet,service=ats-be [production]
09:25 <ema> cp1075: depool ats-be to test set_server_resp_no_store https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/556201/ T227432 [production]
09:14 <ema> repool cp3055 T238305 [production]
09:04 <Nikerabbit> running Translate/refresh-translatable-pages.php --jobqueue for Translate wikis - T235027 T235188 [production]
08:34 <marostegui> Compress cx_corpora on x1 master (db1120) - T240325 [production]
08:34 <marostegui> Upgrade db1140 [production]
08:10 <Urbanecm> Clear signup throttle for IP 195.113.183.5 [production]
08:10 <urbanecm@deploy1001> Synchronized wmf-config/throttle.php: f62edfe: Add throttle rule for Czech student workshop (duration: 01m 02s) [production]
08:04 <elukey> powercycle cp3055 - down for hours, no ssh, no usable mgmt serial console [production]
08:02 <elukey@puppetmaster1001> conftool action : set/pooled=no; selector: name=cp3055.esams.wmnet [production]
07:54 <marostegui> Compress cx_corpora on db1140:3320 T240325 [production]
07:51 <marostegui> Upgrade db2096 (x1 codfw master) [production]
06:59 <marostegui> Compress cx_corpora on db2096 T240325 [production]
06:57 <marostegui> Upgrade x1 codfw [production]
06:55 <eileen> process-control config revision is f34450e3ba - turn off dedupe to do Benevity import [production]
06:46 <effie> restart graphoid on scb1001 [production]
06:44 <marostegui> Stop mysql on db1124 for upgrade [production]
06:28 <marostegui> Stop MySQL on db2070 - T239684 [production]
06:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db2070 from config as it will be decommissioned T239684', diff saved to https://phabricator.wikimedia.org/P9848 and previous config saved to /var/cache/conftool/dbconfig/20191211-062700-marostegui.json [production]
06:25 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Remove db2070 from config T239684 (duration: 01m 08s) [production]
06:24 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Remove db2070 from config T239684 (duration: 01m 18s) [production]
06:22 <marostegui> Remove db2070 from tendril and zarcillo T239684 [production]
06:07 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
06:07 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
06:00 <marostegui> Compress cx_corpora on db2131 T240325 [production]
05:45 <marostegui> Deploy schema change on dbstore1004:3314 [production]
00:54 <eileen> process-control config revision is 3f60e8fe9e [production]