2018-04-12
07:17 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1078 after alter table - T190780 (duration: 01m 16s) [production]
07:11 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1078 for alter table - T190780 (duration: 01m 17s) [production]
06:57 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1077 after alter table - T190780 (duration: 01m 17s) [production]
06:53 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1077 for alter table - T190780 (duration: 01m 17s) [production]
06:48 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1072 after alter table - T190780 (duration: 01m 16s) [production]
06:42 <marostegui> Deploy schema change on db1072 (sanitarium master for s3) - this will generate lag on s3 labsdb - T190780 [production]
06:36 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1072 for alter table - T190780 (duration: 01m 18s) [production]
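The depool/repool pairs above follow the usual pattern for replica maintenance: edit wmf-config/db-eqiad.php to take the host out of its section's load array, sync the file, run the ALTER TABLE, then restore the host and sync again. A minimal sketch of what such an edit might look like, assuming a sectionLoads-style array keyed by section and host (the real layout of db-eqiad.php may differ; host names come from the entries above, weights are illustrative):

    <?php
    // Sketch only, not the actual structure of wmf-config/db-eqiad.php (T190780).
    $sectionLoads = [
        's3' => [
            // 'db1072' => 100,  // depooled while the schema change runs (sanitarium master for s3)
            'db1077' => 100,     // repooled at 06:57 after its alter table
            'db1078' => 100,     // repooled at 07:17 after its alter table
        ],
    ];

Each edit is then pushed out with scap, which is what produces the "Synchronized wmf-config/db-eqiad.php ... (duration: ...)" lines.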
06:27 <marostegui> Deploy schema change on s3 codfw master (db2043) - this will generate lag on s3 codfw - T190780 [production]
06:24 <marostegui> Deploy schema change on s1 primary master (db1052) - T190780 [production]
06:11 <marostegui> Deploy schema change on s7 primary master (db1062) - T190780 [production]
06:08 <elukey> force kill of fuse_dfs (handling /mnt/hdfs) on stat1004, apparently causing a huge load [production]
06:05 <elukey> force kill of fuse_dfs (handling /mnt/hdfs) on stat1005, apparently causing a huge load [production]
05:52 <marostegui> Deploy schema change on s2 primary master (db1054) - T190780 [production]
05:49 <marostegui> Deploy schema change on s8 primary master (db1071) - T190780 [production]
05:45 <marostegui> Deploy schema change on s4 primary master (db1068) - T190780 [production]
05:39 <marostegui> Deploy schema change on s6 primary master (db1061) - T190780 [production]
05:34 <marostegui> Deploy schema change on s5 primary master (db1070) - T190780 [production]
05:27 <marostegui> Deploy schema change on db1109 - T187089 T185128 T153182 [production]
05:25 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1109 for alter table (duration: 01m 17s) [production]
05:15 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1099:3318 after alter table (duration: 01m 18s) [production]
05:11 <marostegui> Reload haproxy on dbproxy1011 to repool labsdb1009 [production]
02:38 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.28) (duration: 07m 20s) [production]
01:34 <eileen> civicrm revision changed from 07bade75a2 to b3326dbf70, config revision is 853fcc9111 (deploy wmffraud report) [production]
00:44 <twentyafterfour> The hotfix that I deployed for phabricator: https://phabricator.wikimedia.org/rPHEX7801b519442eea2bfd47a272ba36959b487ae7d7 [production]
00:33 <twentyafterfour> phabricator: hotfixing DeadlineEditEngineSubtype.php [production]
00:23 <twentyafterfour> phabricator is back [production]
00:18 <twentyafterfour> phabricator will be offline for just a moment while I run the upgrade script. [production]
00:15 <twentyafterfour> preparing to deploy phabricator rPHDEP/release/2018-04-12/1 https://phabricator.wikimedia.org/project/view/3335/ [production]
00:09 <mutante> jerkins-bot is returning -1 on all tests because operations-mw-config-php55lint fails: it cannot clone on integration-slave-jessie-1003, which is out of disk space in /srv as reported by shinken; most of the usage is under /srv/pbuilder [production]
00:08 <twentyafterfour> phabricator update will begin shortly, running a bit behind due to a massive upstream merge which will have to wait until a later date. [production]
00:08 <maxsem@tin> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/425723/ (duration: 01m 18s) [production]
2018-04-11
23:48 <ejegg> enabled new civicrm contact de-dupe job [production]
23:19 <dereckson@tin> Synchronized wmf-config/InitialiseSettings.php: Allow sysops to create Flow boards on euwiki (T190500) (duration: 01m 17s) [production]
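Per-wiki permission changes such as the euwiki Flow entry above are also made in wmf-config/InitialiseSettings.php. A hedged sketch, assuming group overrides are expressed per wiki and that the relevant right is flow-create-board (the key and variable names here are illustrative, not copied from the real file):

    <?php
    // Sketch only (T190500); $wmgSettingsSketch and the key names are assumptions.
    $wmgSettingsSketch = [
        'groupOverrides' => [
            'euwiki' => [
                'sysop' => [
                    'flow-create-board' => true,  // allow sysops to create Flow boards
                ],
            ],
        ],
    ];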
23:09 <dereckson@tin> Synchronized wmf-config/InitialiseSettings.php: Stop logging autopatrol actions everywhere (T184485) (duration: 01m 18s) [production]
22:47 <samwilson@tin> Synchronized wmf-config/InitialiseSettings.php: Deploy GlobalPreferences T184121 (duration: 01m 17s) [production]
22:47 <mutante> ores2* - puppet ran to change venv config, then 'rm -rf /srv/deployment/ores/venv/' via cumin to clean up (T181071) [production]
22:41 <mutante> ores1002-1009 - deleting old venv dir - rm -f /srv/deployment/ores/venv (T181071) [production]
22:37 <mutante> ores1001 - rm -rf /srv/deployment/ores/venv/ [production]
22:37 <mutante> ores - same for codfw instances, change of venv path to /srv/deployment/ores/deploy/venv/ [production]
22:30 <mutante> ores - all eqiad instances are being restarted by puppet after config change [production]
22:28 <mutante> ores - running puppet on all instances to apply venv path change for T181071 [production]
22:24 <musikanimal@tin> Synchronized wmf-config/InitialiseSettings.php: Enabling PageAssessments on huwiki (T191697) (duration: 01m 17s) [production]
22:23 <bstorm_> views updated on labsdb1009 [production]
22:13 <musikanimal@tin> Synchronized wmf-config/InitialiseSettings.php: Enabling PageAssessments on frwiki (T153393) (duration: 01m 26s) [production]
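The two PageAssessments entries above (frwiki at 22:13, huwiki at 22:24) are the typical one-line feature enable in wmf-config/InitialiseSettings.php: flip a per-wiki switch and sync the file. A minimal sketch, assuming the switch is called wmgUsePageAssessments (an assumption, not verified against the repository):

    <?php
    // Sketch only; the switch name and surrounding variable are assumptions.
    $wmgSettingsSketch = [
        'wmgUsePageAssessments' => [
            'default' => false,
            'frwiki'  => true,  // T153393
            'huwiki'  => true,  // T191697
        ],
    ];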
20:36 <urandom> increase change-prop sample rate in dev env to 40% (from 20%) -- T186751 [production]
20:20 <awight@tin> Finished deploy [ores/deploy@b6deb5d]: Transitional virtualenv for ORES (take 2), T181071 (duration: 18m 34s) [production]
20:02 <thcipriani@tin> Synchronized php: group1 to 1.31.0-wmf.29 (duration: 01m 16s) [production]
20:02 <awight@tin> Started deploy [ores/deploy@b6deb5d]: Transitional virtualenv for ORES (take 2), T181071 [production]
20:00 <thcipriani@tin> rebuilt and synchronized wikiversions files: group1 to 1.31.0-wmf.29 [production]
19:23 <thcipriani@tin> rebuilt and synchronized wikiversions files: group0 to 1.31.0-wmf.29 [production]
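The "rebuilt and synchronized wikiversions files" entries record the deployment-train promotion: the per-wiki version map is regenerated so that group0 wikis (and, at 20:02, group1) run 1.31.0-wmf.29 while the remaining wikis stay on wmf.28. A rough sketch of the kind of dbname-to-branch mapping involved, with illustrative wiki names and assuming the generated PHP form of the map:

    <?php
    // Sketch only; the real wikiversions files are generated, and the wiki names
    // below merely illustrate one wiki per deployment group.
    $wikiversions = [
        'mediawikiwiki' => 'php-1.31.0-wmf.29',  // group0, promoted at 19:23
        'commonswiki'   => 'php-1.31.0-wmf.29',  // group1, promoted at 20:02
        'enwiki'        => 'php-1.31.0-wmf.28',  // group2, still on the previous branch
    ];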