2016-05-19
18:23 <cscott> re-attempting parsoid deploy of 67816adf [production]
18:22 <cscott> to clean up Parsoid repos ori ran: salt 'wtp*' cmd.run "sed -i -e '/106801025/d' /srv/deployment/parsoid/deploy/src/lib/api/routes.js" [production]
18:13 <cscott> parsoid deploy reverted to parsoid/deploy-sync-20160504-200410 tag (b0d015fa); 21 repos still dirty [production]
18:09 <cscott> starting to revert Parsoid deploy due to unresolved dirty repos [production]
18:05 <cscott> git-deploy of Parsoid failed with "21/44 minions completed checkout" due to dirty repos; root had applied a patch during the restbase/changeprop/parsoid outage [production]
17:39 <cscott> starting Parsoid deploy (of 67816adf) [production]
17:25 <elukey> execute sysctl -w net.netfilter.nf_conntrack_max=512000 on kafka1013 as temporary measure (investigating why conntrack count is higher after leader election) - T135557 [production]
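(sketch, not from the log: a quick way on kafka1013 to compare conntrack usage against the new limit; both sysctl keys are standard netfilter counters)
    # current number of tracked connections vs. the configured ceiling
    sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
    # the temporary, runtime-only raise applied above (not persisted across reboots)
    sysctl -w net.netfilter.nf_conntrack_max=512000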
17:13 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1033 after maintenance with low weight; increase db1029 weight (duration: 00m 29s) [production]
16:31 <volans> Set runtime values for max_allowed_packet, innodb_buffer_pool_dump_at_shutdown, innodb_buffer_pool_load_at_startup to their configured values for s1-s7, es1-es3, x1 - T133333 [production]
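(sketch, not the exact procedure used: how such runtime values can be inspected and, for dynamic variables, aligned with the configured ones via the mysql client; db1052 is a placeholder host)
    # compare the running values with what the configuration file sets
    mysql -h db1052 -e "SELECT @@GLOBAL.max_allowed_packet, @@GLOBAL.innodb_buffer_pool_dump_at_shutdown"
    # dynamic variables can be changed without a restart, e.g.:
    mysql -h db1052 -e "SET GLOBAL innodb_buffer_pool_dump_at_shutdown = ON"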
16:31 <urandom> Disabling puppet on xenon.eqiad.wmnet in preparation for Cassandra upgrade - T126629 [production]
16:29 <elukey> upgrading cassandra from 2.1.12 to 2.1.13 on aqs1001.eqiad.wmnet [production]
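(sketch under assumptions, not the recorded procedure: a typical single-node Cassandra minor-version upgrade on a Debian-style host; package and service names are assumed to be "cassandra")
    nodetool drain                       # flush memtables and stop accepting writes
    service cassandra stop
    apt-get install cassandra=2.1.13     # pin the target minor version
    service cassandra start
    nodetool status                      # confirm the node rejoins as Up/Normal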
15:48 <thcipriani@tin> Synchronized php-1.28.0-wmf.1/extensions/VisualEditor/ApiVisualEditor.php: SWAT: [[gerrit:289586|Debug log strange-looking ETags being sent to RB]] (duration: 00m 29s) [production]
15:41 <thcipriani@tin> Synchronized php-1.28.0-wmf.2/extensions/VisualEditor/ApiVisualEditor.php: SWAT: [[gerrit:289587|Debug log strange-looking ETags being sent to RB]] (duration: 00m 44s) [production]
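(sketch, assumed invocation: the "Synchronized ..." lines above are the log messages emitted by scap's single-file sync run on tin; the exact wrapper name may differ)
    # push one file to the app servers, attaching the SWAT log message
    sync-file php-1.28.0-wmf.2/extensions/VisualEditor/ApiVisualEditor.php 'SWAT: [[gerrit:289587|Debug log strange-looking ETags being sent to RB]]'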
15:16 <hashar> Restarted zuul-merger daemons on both gallium and scandium: file descriptors leaked [production]
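(sketch, not from the log: one way to spot a file-descriptor leak on those hosts before restarting; assumes a single zuul-merger process per host)
    # count open descriptors held by the zuul-merger process
    ls /proc/$(pgrep -f zuul-merger | head -1)/fd | wc -l
    # compare against the per-process limit
    grep 'open files' /proc/$(pgrep -f zuul-merger | head -1)/limits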
14:20 <volans@tin> Synchronized wmf-config/db-eqiad.php: Repool db1029 (x1) with low weight - T112079 (duration: 00m 40s) [production]
14:18 <akosiaris> enable puppet on maps-test200{2,3,4} [production]
14:02 <akosiaris> enabled and ran puppet on maps-test2001 [production]
13:58 <akosiaris> disable puppet on maps-test200{1,2,3,4} to enable cassandra metrics collection selectively [production]
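(sketch, not the recorded commands: the standard puppet agent calls for this disable/enable sequence on the maps-test hosts)
    # freeze puppet with a reason so other operators can see why
    puppet agent --disable "enabling cassandra metrics collection selectively"
    # ...apply the change host by host, then re-enable and run:
    puppet agent --enable
    puppet agent --test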
13:31 <chasemp> reboot labstore1003 kernel upgrade [production]
12:54 <godog> bounce carbon-c-relay on graphite1001, run with debug version [production]
12:45 <elukey> restarted oozie on analytics1003 for security upgrades [production]
12:28 <elukey> restarted hue on analytics1027 for security upgrades [production]
11:26 <moritzm> restarting salt-master on neodymium [production]
11:09 <ori> dropped negative values from mc_get_hits_rate ganglia metrics for eqiad memcached hosts by running https://phabricator.wikimedia.org/P3138 [production]
10:49 <volans> db1029 stop, backup and reimage T112079 [production]
10:48 <jynus> db1033 stop, backup and reimage T134555 [production]
10:41 <volans> Disable puppet on db1029 for reimaging T112079 [production]
10:03 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1033 (s7 old master) & db1029 (x1-slave) for maintenance (duration: 02m 05s) [production]
09:51 <moritzm> restarting apache2 on palladium (will impose a few temporary puppet failures) [production]
09:46 <moritzm> restarting apache2 on strontium (will impose a few temporary puppet failures) [production]
09:39 <hashar> Restarting Jenkins [production]
09:23 <kart_> updated cxserver to 4aaec58 [production]
09:22 <moritzm> restarting apache on neon (hosting icinga) for security update [production]
09:08 <moritzm> restarting apache on silver (hosting wikitech) for security update [production]
08:35 <hashar> gallium: purging old Linux kernel packages (~2.2 Gbytes) [production]
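(sketch, not the commands hashar ran: a common way to find and purge superseded kernel packages on a Debian/Ubuntu host such as gallium; the version string below is a placeholder)
    # list installed kernel images and note which one is currently running
    dpkg -l 'linux-image-*'
    uname -r
    # purge an old, no-longer-running kernel (placeholder version)
    apt-get purge linux-image-3.13.0-61-generic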
08:27 <moritzm> restarting apache on ytterbium (hosting gerrit.wikimedia.org) for security update [production]
08:06 <moritzm> rolling restart of hhvm on mediawiki hosts in eqiad to pick up expat security update [production]
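(sketch under assumptions, not the actual tooling: a batched restart driven from the salt master so only a fraction of hosts restart HHVM at once; the target glob is illustrative)
    # restart HHVM on 10% of the matching minions at a time
    salt -b 10% 'mw1*' cmd.run 'service hhvm restart'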
07:17 <jynus> performing schema change on s4 T130692 [production]
07:04 <moritzm> installed chromium security updates on osmium [production]
06:11 <gehel> completed rolling restart of Elasticsearch codfw for Java update (T135499) [production]
03:10 <ejegg|away> updated fundraising tools from 220afdeaa36bc3feaaff1f781e7761d7878c4ee8 to b2425aef2154d6b689900f4848cca02880321230 [production]
02:47 <ejegg|away> updated misc fundraising tools from e2978024e6f6b6881d087ac5d07e4c40f7374709 to 220afdeaa36bc3feaaff1f781e7761d7878c4ee8 [production]
02:06 <ejegg> enabled banner history queue consumer [production]
02:02 <ejegg> updated civicrm from 7952ba43a012cb6a2e8d16af19bb13ed520bd56f to b7b46740d701942507dca0a98a75f3f87b6b31b1 [production]
01:21 <twentyafterfour> Phabricator upgrade completed and service restored. [production]
01:15 <twentyafterfour> Phabricator deployment T134443 starting momentarily. Downtime should be minimal but there will be a short interruption while the service restarts. [production]