2024-01-25
16:48 <eevans@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 30 days, 0:00:00 on restbase2013.codfw.wmnet with reason: Decommissioning — T352469 [production]
16:48 <eevans@cumin1002> START - Cookbook sre.hosts.downtime for 30 days, 0:00:00 on restbase2013.codfw.wmnet with reason: Decommissioning — T352469 [production]
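The sre.hosts.downtime runs above are launched from a cumin host. A minimal sketch of the 30-day decommissioning invocation, assuming the usual --days/-r/-t flag names, which may differ from the deployed cookbook:
  sudo cookbook sre.hosts.downtime --days 30 -r "Decommissioning" -t T352469 'restbase2013.codfw.wmnet'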
16:43 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for 32 hosts [production]
16:42 <cmooney@cumin1002> START - Cookbook sre.hosts.remove-downtime for 32 hosts [production]
16:42 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for cr[1-2]-codfw [production]
16:41 <cmooney@cumin1002> START - Cookbook sre.hosts.remove-downtime for cr[1-2]-codfw [production]
16:34 <cgoubert@cumin2002> conftool action : set/pooled=yes; selector: name=parse2007.codfw.wmnet [production]
16:34 <claime> repooling parse2007 - T355549 [production]
16:33 <cgoubert@cumin2002> conftool action : set/pooled=yes; selector: name=parse2006.codfw.wmnet [production]
16:33 <claime> repooling parse2006 - T355549 [production]
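The conftool actions above map to confctl calls that flip the pooled state of the parse hosts. A minimal sketch, assuming the standard confctl select syntax; the selectors are taken from the log lines:
  sudo confctl select 'name=parse2006.codfw.wmnet' set/pooled=yes
  sudo confctl select 'name=parse2007.codfw.wmnet' set/pooled=yes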
16:32 <claime> uncordoning kubernetes2023 - T355549 [production]
16:32 <claime> uncordoning kubernetes2032 - T355549 [production]
16:29 <claime> uncordoning kubernetes2031 - T355549 [production]
16:13 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2179 (T354336)', diff saved to https://phabricator.wikimedia.org/P55691 and previous config saved to /var/cache/conftool/dbconfig/20240125-161320-marostegui.json [production]
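The dbctl entries in this log pair an instance-level change with a config commit that produces the linked diff. A minimal sketch of the db2179 repool step, assuming typical dbctl subcommands and a staged pooling percentage:
  sudo dbctl instance db2179 pool -p 100
  sudo dbctl config commit -m 'Repooling after maintenance db2179 (T354336)'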
16:03 <topranks> Network maintenance codfw rack b5 underway T355549 [production]
15:58 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:30:00 on 32 hosts with reason: Migrating servers in codfw rack B5 to lsw1-b5-codfw T355549 [production]
15:58 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2179', diff saved to https://phabricator.wikimedia.org/P55690 and previous config saved to /var/cache/conftool/dbconfig/20240125-155813-marostegui.json [production]
15:58 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 1:30:00 on 32 hosts with reason: Migrating servers in codfw rack B5 to lsw1-b5-codfw T355549 [production]
15:57 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:30:00 on cr[1-2]-codfw with reason: prepping for server uplink migration [production]
15:57 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 1:30:00 on cr[1-2]-codfw with reason: prepping for server uplink migration [production]
15:54 <arnaudb@cumin1002> dbctl commit (dc=all): 'preparing to clone db2169 on db2196 as per T343674', diff saved to https://phabricator.wikimedia.org/P55689 and previous config saved to /var/cache/conftool/dbconfig/20240125-155450-arnaudb.json [production]
15:51 <topranks> disabling puppet fleet-wide to allow for maintenance in codfw rack b5 which hosts puppetmaster2003 T355549 [production]
15:46 <topranks> configuring lsw1-b5-codfw switch ports for servers to be moved T355549 [production]
15:46 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on asw-b-codfw,lsw1-b5-codfw.mgmt with reason: prepping for server uplink migration [production]
15:46 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on asw-b-codfw,lsw1-b5-codfw.mgmt with reason: prepping for server uplink migration [production]
15:43 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2179', diff saved to https://phabricator.wikimedia.org/P55688 and previous config saved to /var/cache/conftool/dbconfig/20240125-154307-marostegui.json [production]
15:33 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: wcqs::public [production]
15:28 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2179 (T354336)', diff saved to https://phabricator.wikimedia.org/P55687 and previous config saved to /var/cache/conftool/dbconfig/20240125-152801-marostegui.json [production]
15:25 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: wcqs::public [production]
15:20 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: wdqs::internal [production]
15:20 <hnowlan@puppetmaster1001> conftool action : set/pooled=no; selector: name=maps2006.codfw.wmnet [production]
15:19 <hnowlan@deploy2002> helmfile [codfw] DONE helmfile.d/services/tegola-vector-tiles: apply [production]
15:18 <hnowlan@deploy2002> helmfile [codfw] START helmfile.d/services/tegola-vector-tiles: apply [production]
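The helmfile apply above is run from the deploy host against the codfw environment. A minimal sketch, assuming a helmfile.yaml under the logged helmfile.d path:
  helmfile -e codfw -f helmfile.d/services/tegola-vector-tiles/helmfile.yaml apply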
15:10 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: wdqs::internal [production]
14:35 <cgoubert@cumin2002> conftool action : set/pooled=inactive; selector: name=parse2007.codfw.wmnet [production]
14:35 <claime> Depooling parse2007 (setting inactive) - T355549 [production]
14:34 <cgoubert@cumin2002> conftool action : set/pooled=inactive; selector: name=parse2006.codfw.wmnet [production]
14:34 <claime> Depooling parse2006 (setting inactive) - T355549 [production]
14:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2179 (T354336)', diff saved to https://phabricator.wikimedia.org/P55684 and previous config saved to /var/cache/conftool/dbconfig/20240125-142729-marostegui.json [production]
14:27 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db2179.codfw.wmnet with reason: Maintenance [production]
14:27 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 8:00:00 on db2179.codfw.wmnet with reason: Maintenance [production]
14:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2172 (T354336)', diff saved to https://phabricator.wikimedia.org/P55683 and previous config saved to /var/cache/conftool/dbconfig/20240125-142706-marostegui.json [production]
14:26 <moritzm> installing debmonitor-client 0.3.4 fleet-wide [production]
14:25 <claime> Draining kubernetes2023 - T355549 [production]
14:25 <claime> Draining kubernetes2033 - T355549 [production]
14:23 <claime> Draining kubernetes2032 - T355549 [production]
14:21 <claime> Draining kubernetes2031 - T355549 [production]
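The drain and uncordon entries for the rack B5 move correspond to standard kubectl node operations. A minimal sketch, assuming direct kubectl use (Wikimedia wraps this in its own tooling, so exact flags may differ); node names are taken from the log:
  kubectl drain kubernetes2031 --ignore-daemonsets --delete-emptydir-data
  # after the switch migration:
  kubectl uncordon kubernetes2031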
14:21 <marostegui@cumin1002> dbctl commit (dc=all): 'db2129 (re)pooling @ 100%: After T355885', diff saved to https://phabricator.wikimedia.org/P55682 and previous config saved to /var/cache/conftool/dbconfig/20240125-142102-root.json [production]
14:18 <btullis@cumin1002> END (PASS) - Cookbook sre.hadoop.roll-restart-workers (exit_code=0) restart workers for Hadoop test cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
14:15 <moritzm> failover ganeti master for codfw to ganeti2020 T355549 [production]