2012-12-18
23:40 <maxsem> Finished syncing Wikimedia installation... : https://www.mediawiki.org/wiki/Extension:MobileFrontend/Deployments/2012-12-18 [production]
22:59 <notpeter> temp putting ganglia.w.o behind htaccess for sec reasons [production]
22:54 <maxsem> Started syncing Wikimedia installation... : https://www.mediawiki.org/wiki/Extension:MobileFrontend/Deployments/2012-12-18 [production]
22:46 <aaron> synchronized wmf-config/CommonSettings.php 'switched new captcha setting from testwiki -> test2wiki' [production]
22:43 <aaron> synchronized wmf-config/filebackend.php 'set captcha directory.' [production]
22:43 <Ryan_Lane> deploying Andrew Otto's group changes to OpenStackManager to labsconsole [production]
22:07 <aaron> synchronized php-1.21wmf6/extensions/ConfirmEdit/captcha.py [production]
21:32 <cmjohnson1> authdns-update new dns entries for frack bastion host (tellurium) [production]
21:08 <bsitu> synchronized wmf-config/InitialiseSettings.php 'Turns off Echo temporarily on test and test2' [production]
20:48 <bsitu> synchronized wmf-config/CommonSettings.php 'Update Echo config file' [production]
20:47 <py> gracefulled all apaches [production]
20:45 <notpeter> gracefulling all apaches to pick up https://gerrit.wikimedia.org/r/#/c/38521/ (tested good on srv193) [production]
20:44 <hashar> running puppet on gallium. [production]
20:44 <hashar> Zuul: applying "filters events by user email" to our Zuul deployment https://review.openstack.org/#/c/17609/ [production]
20:27 <bsitu> synchronized wmf-config/CommonSettings.php [production]
20:26 <bsitu> synchronized wmf-config/InitialiseSettings.php [production]
18:31 <LeslieCarr> asw-c-eqiad ae bundles went down, working on fixing [production]
15:45 <MaxSem> Testing done, 40 concurrent processes hitting around the worst-case point kept the load on yttrium at 20%. Average response time ~430ms [production]
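A minimal sketch of the kind of concurrent load test MaxSem describes above, assuming a hypothetical GeoData search URL on yttrium and 40 worker threads; the actual tooling and queries used are not recorded in this log:

```python
# Hedged illustration only -- the URL, request count and method are assumptions,
# not the actual test harness. Fires 40 concurrent requests at a spatial
# search endpoint and reports the average response time.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = ("https://yttrium.example/w/api.php"
       "?action=query&list=geosearch&gscoord=0%7C0&gsradius=10000&format=json")  # hypothetical
CONCURRENCY = 40
REQUESTS = 400

def timed_request(_):
    start = time.time()
    with urlopen(URL) as resp:
        resp.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    durations = list(pool.map(timed_request, range(REQUESTS)))

print("average response time: %.0f ms" % (1000 * sum(durations) / len(durations)))
```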
15:19 <MaxSem> Load-testing spatial search [production]
14:14 <hashar> restarting Zuul with https://gerrit.wikimedia.org/r/39082 so it starts voting Verified+2 [production]
14:13 <^demon> restarting gerrit on manganese to pick up VRIF+2 [production]
13:50 <hashar> restarted puppet on gallium (some apt-get process was a zombie) [production]
09:25 <nikerabbit> synchronized wmf-config/CommonSettings.php 'Bug 43075' [production]
06:30 <andrewbogott> switched all labs instances to mount /home via gluster on next reboot [production]
06:29 <andrewbogott> rsynced all labs homedirs to gluster volumes [production]
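As an illustration of the homedir copy step above (the actual commands are not logged; source and destination paths below are assumptions), a per-user rsync from NFS-backed homes to a Gluster mount point could look like this:

```python
# Illustrative sketch only: paths are assumed, not taken from the log.
# Copies each labs home directory onto a Gluster-backed volume with rsync,
# preserving ownership, permissions and timestamps.
import os
import subprocess

SRC_ROOT = "/home"              # assumption: NFS-backed home directories
DST_ROOT = "/mnt/gluster/home"  # assumption: Gluster volume mount point

for user in sorted(os.listdir(SRC_ROOT)):
    src = os.path.join(SRC_ROOT, user) + "/"
    dst = os.path.join(DST_ROOT, user) + "/"
    os.makedirs(dst, exist_ok=True)
    # -a preserves metadata; --delete keeps the destination an exact mirror
    subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)
```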
02:46 <LocalisationUpdate> completed (1.21wmf5) at Tue Dec 18 02:45:58 UTC 2012 [production]
02:25 <LocalisationUpdate> completed (1.21wmf6) at Tue Dec 18 02:25:22 UTC 2012 [production]
01:28 <mutante> fixing duplicate UID issue on stat1 for maryana [production]
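A sketch of how a duplicate-UID conflict like the one noted on stat1 can be spotted from the local passwd database; the actual diagnosis and fix are not recorded here:

```python
# Hedged example: lists UIDs that map to more than one account name.
import pwd
from collections import defaultdict

accounts_by_uid = defaultdict(list)
for entry in pwd.getpwall():
    accounts_by_uid[entry.pw_uid].append(entry.pw_name)

for uid, names in sorted(accounts_by_uid.items()):
    if len(names) > 1:
        print("duplicate UID %d: %s" % (uid, ", ".join(names)))
```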
00:57 <LeslieCarr> asw-c-eqiad unreachable due to LACP issue [production]
00:32 <mutante> fixing fenari permissions for gwicke.. (pre-puppet age UID) [production]
00:26 <LeslieCarr> starting upgrade of asw-c-eqiad.mgmt - connectivity to row C machines may be affected [production]
00:11 <notpeter> temp stopping slave on es1009 and es1010 for upcoming networking downtime [production]
2012-12-17
23:12 <LeslieCarr> restarted pybal on lvs1001-1003 in order to restart their BGP peering [production]
22:44 <notpeter> taking fenari down for upgrade to precise (not upgrading, not reimaging) [production]
22:37 <LeslieCarr> cr1-eqiad being upgraded and rebooted [production]
22:30 <Ryan_Lane> labstore1 is locked up, powercycling [production]
22:29 <Nemo_bis> en.wiki job queue spiked from 1 to 3 million in the last 3 hours [production]
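The enwiki job queue size that Nemo_bis mentions is exposed through the public siteinfo statistics API; the snippet below is a general way to read it, not necessarily how the spike was observed:

```python
# Reads the current enwiki job queue length from the MediaWiki API.
import json
from urllib.request import Request, urlopen

API = ("https://en.wikipedia.org/w/api.php"
       "?action=query&meta=siteinfo&siprop=statistics&format=json")

req = Request(API, headers={"User-Agent": "sal-jobqueue-check/0.1 (example)"})
with urlopen(req) as resp:
    stats = json.load(resp)["query"]["statistics"]

print("jobs queued:", stats["jobs"])
```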
21:22 <Ryan_Lane> rebooting labstore4 [production]
21:15 <reedy> synchronized php-1.21wmf6/includes/filebackend/FSFileBackend.php [production]
21:11 <Ryan_Lane> rebooting labstore3 [production]
21:03 <Ryan_Lane> rebooting labstore2 [production]
20:48 <Ryan_Lane> restarting labstore1 [production]
20:33 <cmjohnson1> auth-dns update to add internal IPs for solr1-3 [production]
19:35 <hashar> regenerating Jenkins job mediawiki-core-install-sqlite [production]
19:12 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: enwiki to 1.21wmf6 [production]
18:16 <andrewbogott> beginning labs $HOME migration from NFS to gluster [production]
16:17 <cmjohnson1> auth-dns update adding mgmt for solr1-3, solr1001-3 and internal IPs for solr1001-3 [production]
13:13 <hashar> set Jenkins to use /bin/bash as the default shell (instead of /bin/sh) [production]
04:02 <paravoid> killall -9 convert on imagescalers; uploading 120px generated thumbnail directly to swift [production]