2018-03-22
13:25 <zfilipin@tin> Synchronized static/images/project-logos/: SWAT: [[gerrit:421093|Change bewikibooks logo (T189218)]] (duration: 01m 16s) [production]
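The "Synchronized …" entries in this log are emitted automatically when a change is synced from the deployment host with scap. A minimal sketch of such a sync, reusing one of the messages from this log (the exact subcommand for syncing a directory rather than a file may differ):
 # run on the deployment host (tin); scap records the "Synchronized ..." line, including duration, in this log
 scap sync-file wmf-config/Wikibase.php 'Remove forceWriteTermsTableSearchFields from testwikidatawiki (T189776)'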
13:23 <godog> reenabling puppet fleetwide to enable CA switch - T189891 [production]
13:11 <ladsgroup@tin> Synchronized wmf-config/Wikibase.php: [[gerrit:421269|Remove forceWriteTermsTableSearchFields from testwikidatawiki, part II (T189776)]] (duration: 01m 15s) [production]
13:09 <ladsgroup@tin> Synchronized wmf-config/Wikibase-production.php: [[gerrit:421269|Remove forceWriteTermsTableSearchFields from testwikidatawiki, part I (T189776)]] (duration: 01m 16s) [production]
13:05 <godog> stop rsync of ca/volatile on puppetmaster1001 [production]
12:31 <godog> chown puppet:puppet /var/lib/puppet/server/ssl/ca on puppetmaster2001 [production]
12:20 <godog> running puppet on puppetmaster[21]001 - T189891 [production]
12:12 <godog> stopping puppet fleetwide for ca migration - T189891 [production]
11:20 <elukey> rolling restart of the hadoop hdfs datanode daemons on all the analytics hadoop workers for openjdk-8 upgrade [production]
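A rolling restart like the one above is typically driven a few hosts at a time rather than fleet-wide at once; a hedged sketch using cumin, where the host alias, batch sizing and systemd unit name are assumptions:
 # restart HDFS datanodes two hosts at a time, sleeping between batches
 sudo cumin -b 2 -s 60 'A:hadoop-worker' 'systemctl restart hadoop-hdfs-datanode'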
11:18 <apergos> and a third time to try updating the puppet compiler facts, this time using puppetmaster2001 [production]
11:09 <arturo> T189722 reboot labtestvirt2002 to downgrade kernel [production]
11:02 <moritzm> installing plexus-utils security updates [production]
11:01 <arturo> T189722 reboot labtestvirt2001 to downgrade kernel [production]
10:53 <apergos> due to miscommunication, second update of puppet compiler facts happening now. oh well [production]
10:42 <elukey> update puppet compiler's facts [production]
10:28 <ema> cp-upload_esams: carry on with reboots for retpoline kernel updates T188092 [production]
10:10 <ema> repool cp3010 [production]
09:55 <elukey> rolling restart of yarn nodemanagers on the analytics hadoop workers for openjdk-8 upgrade [production]
09:21 <marostegui> Truncate updatelog on s3 - T174804 [production]
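The "Truncate updatelog" entries here and below correspond to a single SQL statement run against each wiki database on the named section; a minimal sketch with a placeholder database name:
 # run for every wiki database hosted on the section ("wikidb" is a placeholder)
 sudo mysql wikidb -e 'TRUNCATE TABLE updatelog;'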
09:19 <marostegui> Truncate updatelog on s1 - T174804 [production]
09:04 <marostegui> Truncate updatelog on s7 - T174804 [production]
08:51 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1060 (duration: 01m 15s) [production]
08:45 <marostegui> Truncate updatelog on s2 - T174804 [production]
08:30 <marostegui> Truncate updatelog on s4,s5,s6,s8 - T174804 [production]
08:29 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool pc1006 after kernel, mariadb and socket location upgrade (duration: 01m 11s) [production]
08:21 <jynus> upgrade and restart db1060 [production]
08:17 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1060 (duration: 01m 15s) [production]
08:06 <marostegui> Restart pt-heartbeat on pc2006 [production]
08:05 <marostegui> Restart pt-heartbeat on pc2004 and pc2005 [production]
08:04 <marostegui> Restart pt-heartbeat on pc1004 and pc1005 [production]
07:59 <marostegui> Stop MySQL on pc1006 for kernel, mariadb and socket path upgrade [production]
07:58 <elukey> depool cp3010 + powercycle (no ssh access, mgmt console frozen) [production]
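Depooling and repooling a cache host such as cp3010 (see the 10:10 repool above) goes through conftool; a sketch, assuming the usual confctl selector syntax and the host's esams FQDN:
 # take cp3010 out of rotation before the powercycle, then pool it again afterwards
 sudo confctl select 'name=cp3010.esams.wmnet' set/pooled=no
 sudo confctl select 'name=cp3010.esams.wmnet' set/pooled=yes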
07:33 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool pc1006 for kernel, mariadb and socket location upgrade (duration: 01m 16s) [production]
06:25 <marostegui> Remove db1001 from tendril - T190262 [production]
06:25 <marostegui> Stop MySQL on db1001 to get ready to decommission it - T190262 [production]
06:16 <marostegui> Reload dbproxy1006 to pick up the new standby host - T183469 [production]
06:15 <marostegui> Reload dbproxy1001 to pick up the new standby host - T183469 [production]
02:30 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.25) (duration: 07m 46s) [production]
01:52 <ebernhardson> increase cluster.routing.allocation.disk.watermark.low to 80% on eqiad elasticsearch due to shards not allocating during reindex [production]
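The watermark change above maps to a cluster-wide settings update via the Elasticsearch API; a sketch assuming a transient update issued against a local node (the endpoint and the transient/persistent choice are assumptions, the setting name and value are as logged):
 # raise the low disk watermark so shards keep allocating during the reindex
 curl -XPUT 'http://localhost:9200/_cluster/settings' \
   -H 'Content-Type: application/json' \
   -d '{"transient": {"cluster.routing.allocation.disk.watermark.low": "80%"}}'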
01:10 <ebernhardson> started in-place reindex of all wikis on both elasticsearch clusters [production]
00:02 <andrewbogott> restarted nova-network on labnet1001 and nova-compute on labvirt1015 as part of debugging T190367 [production]
00:00 <Amir1> Evening SWAT is done [production]
00:00 <ladsgroup@tin> Synchronized wmf-config/InitialiseSettings.php: [[gerrit:421194|guwiki: fix rollback -> rollbacker (group) (T190370)]] (duration: 01m 16s) [production]
2018-03-21
23:53 <ladsgroup@tin> Synchronized wmf-config/InitialiseSettings.php: [[gerrit:421191|Migrate $wgOresModels to the new config system (T189948)]] (duration: 01m 16s) [production]
23:41 <ladsgroup@tin> Synchronized wmf-config/throttle.php: [[gerrit:420807|Add new throttle rule and add task for one in comment]] (duration: 01m 16s) [production]
23:36 <ladsgroup@tin> Synchronized wmf-config/InitialiseSettings.php: [[gerrit:421049|guwiki: clean up $wg{Add,Remove}Groups configuration]] (duration: 01m 16s) [production]
23:21 <ladsgroup@tin> Synchronized wmf-config/InitialiseSettings.php: [[gerrit:420989|Enable $wgAbuseFilterProfile & $wgAbuseFilterRuntimeProfile on eswikibooks, part II (T190264)]] (duration: 01m 15s) [production]
23:19 <ladsgroup@tin> Synchronized wmf-config/abusefilter.php: [[gerrit:420989|Enable $wgAbuseFilterProfile & $wgAbuseFilterRuntimeProfile on eswikibooks, part I (T190264)]] (duration: 01m 15s) [production]
22:33 <eileen> civicrm revision changed from 3291ad35c9 to 85c89c7d0a, config revision is 03511638ed [production]
22:32 <maxsem@tin> Synchronized wmf-config/InitialiseSettings.php: Revert global prefs (duration: 01m 15s) [production]