2023-02-16
16:01 <wm-bot2> Adding OSD cloudcephosd1002.eqiad.wmnet... (1/1) (T329498) - cookbook ran by dcaro@vulcanus [admin]
16:01 <wm-bot2> Adding new OSDs ['cloudcephosd1002.eqiad.wmnet'] to the cluster (T329498) - cookbook ran by dcaro@vulcanus [admin]
16:00 <wm-bot2> Adding OSD cloudcephosd1002.eqiad.wmnet... (1/1) (T329498) - cookbook ran by dcaro@vulcanus [admin]
16:00 <wm-bot2> Adding new OSDs ['cloudcephosd1002.eqiad.wmnet'] to the cluster (T329498) - cookbook ran by dcaro@vulcanus [admin]
14:05 <wm-bot2> Added 1 new OSDs ['cloudcephosd1001.eqiad.wmnet'] (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:29 <wm-bot2> Adding new OSDs ['cloudcephosd1001.eqiad.wmnet'] to the cluster (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:29 <wm-bot2> Adding new OSDs ['cloudcephosd1001.eqiad.wmnet'] to the cluster (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:24 <wm-bot2> Adding OSD cloudcephosd1001.eqiad.wmnet... (1/1) (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:24 <wm-bot2> Adding new OSDs ['cloudcephosd1001.eqiad.wmnet'] to the cluster (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:23 <wm-bot2> Destroying OSDs with ids in [63, 62, 61, 60, 59, 58, 57, 56] on cloudcephosd1002 from eqiad1 (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:21 <wm-bot2> Depooling OSDs with ids in [63, 62, 61, 60, 59, 58, 57, 56] on cloudcephosd1002 from eqiad1 (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:14 <wm-bot2> Adding OSD cloudcephosd1001.eqiad.wmnet... (1/1) (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:14 <wm-bot2> Adding new OSDs ['cloudcephosd1001.eqiad.wmnet'] to the cluster (T329498) - cookbook ran by dcaro@vulcanus [admin]
11:20 <wm-bot2> Destroying OSDs with ids in [53, 52, 51, 50] on cloudcephosd1001 from eqiad1 (T329498) - cookbook ran by dcaro@vulcanus [admin]
11:19 <wm-bot2> Depooling OSDs with ids in [53, 52, 51, 50] on cloudcephosd1001 from eqiad1 (T329498) - cookbook ran by dcaro@vulcanus [admin]
11:03 <wm-bot2> Destroying OSDs with ids in [55, 54, 53, 52, 51, 50] on cloudcephosd1001 from eqiad1 (T329498) - cookbook ran by dcaro@vulcanus [admin]
11:01 <wm-bot2> Depooling OSDs with ids in [55, 54, 53, 52, 51, 50] on cloudcephosd1001 from eqiad1 (T329498) - cookbook ran by dcaro@vulcanus [admin]
10:59 <wm-bot2> Depooling OSDs with ids in [55, 54, 53, 52, 51, 50] on cloudcephosd1001 from eqiad1 (T329498) - cookbook ran by dcaro@vulcanus [admin]
10:15 <dcaro> purging osd daemons 48 and 40 from the eqiad ceph cluster (T329709) [admin]
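The OSD add/depool/destroy entries above are driven by WMCS Ceph cookbooks; as a rough sketch of the equivalent manual steps on a stock Ceph cluster (an illustration only, not the cookbooks' actual implementation; the device path and OSD id are examples):

    # add a new OSD from a raw device on the storage host (example device path)
    ceph-volume lvm create --data /dev/sdc

    # depool an OSD: mark it out and wait for placement groups to drain
    ceph osd out 56
    ceph status                      # wait for recovery to finish / HEALTH_OK

    # destroy the OSD once it is safe to stop it
    ceph osd ok-to-stop 56
    systemctl stop ceph-osd@56
    ceph osd purge 56 --yes-i-really-mean-it

The cookbook runs logged above automate this per-host lifecycle (adding, depooling, destroying) end to end.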
2023-02-15
14:53 <andrewbogott> deleting another 100 leaked VM images with wmcs-novastats-cephleaks [admin]
13:39 <wm-bot2> Destroying OSDs with id [48] on cloudcephosd1001 from eqiad1 - cookbook ran by dcaro@vulcanus [admin]
13:14 <wm-bot2> Destroying OSDs with id [48] on cloudcephosd1001 from eqiad1 - cookbook ran by dcaro@vulcanus [admin]
13:13 <wm-bot2> Destroying OSDs with id [12345] on cloudcephosd1001 from eqiad1 - cookbook ran by dcaro@vulcanus [admin]
13:11 <wm-bot2> Destroying OSDs with id [12345] on cloudcephosd1001 from eqiad1 - cookbook ran by dcaro@vulcanus [admin]
13:11 <wm-bot2> Destroying OSDs with id [12345] on cloudcephosd1001 from eqiad1 - cookbook ran by dcaro@vulcanus [admin]
13:10 <wm-bot2> Destroying OSDs with id [[12345]] on cloudcephosd1001 from eqiad1 - cookbook ran by dcaro@vulcanus [admin]
13:09 <wm-bot2> Destroying OSDs with id ['12345'] on cloudcephosd1001 from eqiad1 - cookbook ran by dcaro@vulcanus [admin]
2023-02-14
13:17 <andrewbogott> restarting all eqiad1 openstack services because that seems to sometimes help things *shrug* [admin]
2023-02-13
14:06 <wm-bot2> Set the ceph cluster for eqiad1 in maintenance, alert silence ids: 8fbf6bfd-eec1-4d81-8e0d-ea431d8411ee (T329498) - cookbook ran by dcaro@vulcanus [admin]
13:32 <taavi> re-enable puppet on labstore1004 T329377 [admin]
2023-02-09
21:17 <andrewbogott> deleted 10% of leaked VM ceph images using wmcs-novastats-cephleaks (only 10% out of an abundance of caution) [admin]
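wmcs-novastats-cephleaks is a WMCS-internal script; the general idea behind finding "leaked" images can be sketched as diffing RBD image names against live Nova instance UUIDs. Illustration only, under assumptions: the pool name and the <uuid>_disk naming convention below are examples, not taken from the script.

    # UUID-like image names present in the Ceph pool (pool name is a placeholder)
    rbd ls -p eqiad1-compute | sed 's/_disk$//' | sort -u > /tmp/ceph_uuids
    # UUIDs of all existing Nova instances
    openstack server list --all-projects -f value -c ID | sort -u > /tmp/nova_uuids
    # images with no matching instance are leak candidates; review before deleting
    comm -23 /tmp/ceph_uuids /tmp/nova_uuids

Deleting in small batches (100 at a time, or only 10% as in this entry) limits the damage if the candidate list turns out to be wrong.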
2023-02-08
17:08 <arturo> changing the cloudgw network setup, making VIPs /32 (T295774) [admin]
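A /32 VIP is a single host address with no implied connected subnet; on a Linux gateway it would be configured like this (the address and interface are placeholders, not the real cloudgw values):

    # add a single-address (/32) virtual IP on the gateway-facing interface
    ip address add 192.0.2.1/32 dev vlan1120
    ip -4 address show dev vlan1120   # verify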
2023-02-07
11:26 <arturo> [codfw1dev] testing network changes in cloudgw, expect unreliable network (T295774) [admin]
2023-02-04
13:44 <taavi> drop old columns from oathauth_users table on labtestwiki T328131 [admin]
2023-02-03
15:00 <andrewbogott> restarted nova services in eqiad1 in an attempt to eke out another day or two of stability [admin]
14:13 <taavi> attached GrapheSuppression developer account to wikitech [admin]
2023-02-02
13:14 <dcaro_away> draining osd.48 from node cloudcephosd1001 (T316544) [admin]
12:57 <wm-bot2> Set the ceph cluster for eqiad1 in maintenance, alert silence ids: 7ac2b25a-d1bb-4789-8aa6-b9435b505349 (T316544) - cookbook ran by dcaro@vulcanus [admin]
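Putting the Ceph cluster "in maintenance" generally means setting cluster flags so that stopped or flapping OSDs do not trigger rebalancing, plus silencing the matching alerts (the silence ids in these entries are Alertmanager silences). A minimal sketch of the flag side, assuming stock Ceph commands rather than the cookbook's actual code:

    # enter maintenance: keep OSDs logically 'in' and stop data movement
    ceph osd set noout
    ceph osd set norebalance
    # ... maintenance work ...
    # leave maintenance
    ceph osd unset norebalance
    ceph osd unset noout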
2023-01-30
22:34 <wm-bot2> Upgraded and rebooted host cloudrabbit1002.wikimedia.org - cookbook ran by andrew@bullseye [admin]
21:34 <andrewbogott> merging https://gerrit.wikimedia.org/r/c/operations/puppet/+/884922 and upgrading rabbitmq nodes for T328155 [admin]
2023-01-27
20:08 <wm-bot2> Upgraded and rebooted host cloudcontrol2005-dev.wikimedia.org - cookbook ran by andrew@bullseye [admin]
19:22 <wm-bot2> Upgraded and rebooted host cloudcontrol2004-dev.wikimedia.org - cookbook ran by andrew@bullseye [admin]
19:10 <wm-bot2> Upgraded and rebooted host cloudcontrol2001-dev.wikimedia.org - cookbook ran by andrew@bullseye [admin]
15:25 <andrewbogott> restarting openstack services in eqiad1, another attempt to address instability [admin]
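The recurring "restarting openstack services" entries (this one and the ones on 2023-02-03 and 2023-02-14 above) are a blunt recovery step; by hand it amounts to something like the loop below on a control node (unit names are the stock Debian ones and may not match the WMCS deployment exactly):

    # restart the core nova control-plane services and check they came back
    for svc in nova-api nova-conductor nova-scheduler; do
        systemctl restart "$svc"
        systemctl --quiet is-active "$svc" || echo "WARNING: $svc did not come back"
    done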
2023-01-26
20:34 <andrewbogott> shutting down mariadb on cloudbackup2001-dev, testing the waters for T328079 [admin]
2023-01-22
03:42 <andrewbogott> reset eqiad1 rabbitmq in an attempt to resolve some mild instability [admin]
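"Reset" here means wiping and rebuilding the RabbitMQ node state; the stock rabbitmqctl sequence for that is sketched below (illustrative, not necessarily the exact procedure used on the cloudrabbit hosts):

    # on the affected RabbitMQ node: stop the app, wipe local state, start again
    rabbitmqctl stop_app
    rabbitmqctl reset                # force_reset if the node cannot reach its cluster
    rabbitmqctl start_app
    rabbitmqctl cluster_status       # confirm the node is back in the cluster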
2023-01-20
15:26 <wm-bot2> Removed cloudweb hosts (cloudweb2002-dev.wikimedia.org) from maintenance mode. - cookbook ran by andrew@bullseye [admin]
15:26 <wm-bot2> Put cloudweb hosts (cloudweb2002-dev.wikimedia.org) into maintenance mode (downtime id: ['f47a3d91-b270-4c90-acc8-d85075a6bf8e'], use this to unset) - cookbook ran by andrew@bullseye [admin]
13:15 <arturo> reinstall python3-neutron (to reset manual patching) on all cloudnet nodes and patch it via puppet, then restart neutron-l3-agent by hand (T327463) [admin]
10:12 <arturo> [codfw1dev] failed over neutron-l3-agent between cloudnet2005-dev and cloudnet2006-dev a couple of times T327463 [admin]
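The reinstall-then-repatch workflow in these 2023-01-20 entries boils down to restoring the pristine python3-neutron package, letting puppet lay the managed patch back down, and restarting the agent so it loads the patched code; roughly (a sketch: run-puppet-agent is the WMF wrapper, plain 'puppet agent -t' elsewhere):

    # on each cloudnet node
    apt-get install --reinstall python3-neutron   # restore unmodified package files
    run-puppet-agent                              # re-apply the puppet-managed patch
    systemctl restart neutron-l3-agent            # load the patched code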