2018-05-28
05:23 <marostegui> Deploy schema change on db1106 with replication, this will generate lag on labs - T190148 T191519 T188299 [production]
05:23 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1106 for alter table (duration: 01m 24s) [production]
04:47 <legoktm> running modified version of hashar's gear_client.py on contint1001, feel free to kill if it causes problems [releng]
03:14 <l10nupdate@tin> scap sync-l10n completed (1.32.0-wmf.4) (duration: 14m 30s) [production]
2018-05-27
07:25 <joal> Rerun webrequest-load-wf-upload-2018-5-25-23 [analytics]
07:25 <joal> rerun webrequest-load-wf-misc-2018-5-26-16 and webrequest-load-wf-misc-2018-5-27-0 [analytics]
06:02 <legoktm> deployed https://gerrit.wikimedia.org/r/435681 [releng]
01:04 <rxy> rxy@cvn-app8: modified CVNBot.ini for SWBot3, and created symlink. per https://github.com/countervandalism/infrastructure/issues/18 [cvn]
00:24 <legoktm> deployed https://gerrit.wikimedia.org/r/435673 [releng]
2018-05-26
23:09 <Krinkle> Killed a bunch of stuck beta-mediawiki-config-update-eqiad jobs in Jenkins [releng]
23:06 <Krinkle> beta-mediawiki-config-update-eqiad jobs have been stuck in Zuul for 17 hours [releng]
21:47 <reedy@tin> Synchronized composer.json: (no justification provided) (duration: 01m 19s) [production]
21:45 <reedy@tin> Synchronized multiversion/: multiversion (duration: 01m 21s) [production]
21:42 <reedy@tin> Synchronized vendor/: canhasvendor (duration: 01m 46s) [production]
19:01 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1101:3318 after alter table (duration: 01m 21s) [production]
14:25 <marostegui> Add tmp1 index back on db1101:3318 - T194273 [production]
14:25 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1101:3318 for alter table (duration: 01m 07s) [production]
14:17 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1087 after alter table (duration: 01m 20s) [production]
09:56 <marostegui> Add tmp1 index back on db1087 (sanitarium master), this will generate lag on labsdb hosts - T194273 [production]
09:56 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1087 for alter table (duration: 01m 20s) [production]
09:47 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1099:3318 after alter table (duration: 01m 21s) [production]
05:21 <marostegui> Add tmp1 index back on db1099:3318 - T194273 [production]
05:21 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1099:3318 for alter table (duration: 01m 21s) [production]
05:16 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1092 after alter table (duration: 01m 22s) [production]
2018-05-25
23:08 <legoktm> deployed https://gerrit.wikimedia.org/r/435278 https://gerrit.wikimedia.org/r/433197 https://gerrit.wikimedia.org/r/434950 https://gerrit.wikimedia.org/r/434727 https://gerrit.wikimedia.org/r/435221 [releng]
22:57 <mutante> apt.wikimedia.org - import jenkins-debian-glue_0.18.4-wmf3 for jessie-wikimedia (T193910) [production]
21:41 <hashar> cleaned up disk space on deployment-tin . Reenabled jenkins slave. [releng]
21:26 <legoktm@tin> Synchronized php-1.32.0-wmf.5/skins/MonoBook/: Temporarily remove responsive support (T195625) (duration: 01m 21s) [production]
20:59 <paladox> disable puppet on planet-hotdog while doing some UI changes and trying out more plugins [planet]
20:33 <mutante> LDAP: added user wmde-leszek to group 'nda' (T195358) [production]
17:46 <XenoRyet> updated civicrm from 4d797fc592 to 0b97f1f5b2 [production]
15:23 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1119 after alter table (duration: 01m 20s) [production]
13:47 <akosiaris> repool ulsfo, links have been stable for quite a few hours [production]
13:27 <marostegui> Deploy schema change on db1119 - https://phabricator.wikimedia.org/T190148 https://phabricator.wikimedia.org/T191519 https://phabricator.wikimedia.org/T188299 [production]
13:26 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1119 for alter table (duration: 01m 20s) [production]
13:15 <marostegui> Add indexes back on s8 codfw primary master (db2045), this will generate lag on codfw - T194273 [production]
13:14 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1105:3311 after alter table (duration: 01m 20s) [production]
12:28 <moritzm> fixed dpkg installation state on mx2001 [production]
11:32 <akosiaris> switch to SSH RSA 2048 bit keys for eqiad ganeti intracluster communication [production]
11:22 <akosiaris> upgrade eqiad ganeti cluster to ganeti 2.15.2-7+deb9u1~bpo8+1 [production]
11:21 <akosiaris> rebalance row_B codfw ganeti nodegroup. Cluster is now fully upgraded to stretch [production]
11:18 <akosiaris> powercycling ms-be1034, box is unresponsive, tons of logs "sd 0:1:0:1: rejecting I/O to offline device" [production]
10:37 <XioNoX> test force mtu 1400 between cp1074 and cp3039 - T195365 [production]
09:20 <marostegui> Deploy schema change on db1105:3311 - T190148 T191519 T188299 [production]
09:20 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1105:3311 for alter table (duration: 01m 20s) [production]
09:16 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1119 after alter table (duration: 01m 19s) [production]
09:10 <marostegui@tin> scap failed: average error rate on 9/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/2cc7028226a539553178454fc2f14459 for details) [production]
09:03 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1114 after alter table (duration: 01m 20s) [production]
08:44 <marostegui> Stop MySQL on db1120 to transfer its content to db1125 - T190704 [production]
08:36 <marostegui> Add tmp1 back on db1092 - https://phabricator.wikimedia.org/T194273 [production]