2011-07-07
20:47 <Ryan_Lane> lowering carp weight of sq79 and 82 [production]
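(For context: CARP weighting lives in the frontend Squids' cache_peer lines. A minimal sketch of the kind of line involved, assuming Squid 2.7-era syntax and placeholder port/weight values; the production conf was presumably generated from templates, so the real change would have been made there.)

    # lowering weight= shifts part of sq79's share of the CARP hash space onto its peers:
    #   cache_peer sq79.wikimedia.org parent 3128 0 carp weight=10
    squid -k parse        # sanity-check the edited conf before pushing it out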
20:45 <Ryan_Lane> repooling sq79 and sq82 in lvs [production]
20:44 <Ryan_Lane> temporarily depooling sq79 and sq82 [production]
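(Pooling at this layer is handled by the LVS balancers; at the time this was driven by PyBal configuration, but purely as an illustration, a generic ipvsadm equivalent of depooling and repooling a realserver would look like this, with placeholder service and host addresses.)

    # depool: drop the realserver from the virtual service
    ipvsadm -d -t 10.2.1.1:80 -r sq79.example.org:80
    # repool: add it back (direct routing, weight 10)
    ipvsadm -a -t 10.2.1.1:80 -r sq79.example.org:80 -g -w 10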
20:00 <reedy> synchronized php-1.17/wmf-config/InitialiseSettings.php 'bug 29360' [production]
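(The "synchronized ..." entries are logged automatically by the deployment scripts; the command behind an entry like this one was presumably something along these lines, with the exact invocation and working directory assumed.)

    sync-file php-1.17/wmf-config/InitialiseSettings.php 'bug 29360'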
18:47 <Ryan_Lane> adding another IP to that search block [production]
18:38 <Ryan_Lane> deploying new squid conf with a blocked ip, to block a search spammer [production]
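(A sketch of the kind of change implied here, assuming a plain Squid ACL; the IP below is a documentation placeholder, not the spammer's real address.)

    # lines added to squid.conf:
    #   acl search_spammer src 192.0.2.10
    #   http_access deny search_spammer
    squid -k reconfigure    # pick up the new conf without restarting the cache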
18:04 <demon> synchronized php-1.17/wmf-config/InitialiseSettings.php 'More GNSM: had to set comment namespace and fallback category' [production]
17:59 <demon> synchronized php-1.17/wmf-config/InitialiseSettings.php 'GNSM for fawikinews: bug 29563' [production]
17:52 <Ryan_Lane> that's powering [production]
17:52 <Ryan_Lane> powing off srv154 [production]
17:51 <demon> synchronized php-1.17/extensions/GoogleNewsSitemap/GoogleNewsSitemap.alias.php [production]
16:39 <reedy> synchronized php-1.17/wmf-config/InitialiseSettings.php 'bug 27840, moving wiki' [production]
15:45 <mark> Established full iBGP mesh between cr1-sdtpa, csw5-pmtpa, cr1-eqiad, cr2-eqiad (NOT csw1-sdtpa) [production]
00:25 <awjr> disabling fundraising-related squid log filtering for udp2log [production]
2011-07-06
23:45 <Ryan_Lane> powercycling srv281 [production]
23:44 <Ryan_Lane> powercycling srv266 [production]
23:44 <Ryan_Lane> powercycling srv217 [production]
23:43 <Ryan_Lane> powercycling srv206 [production]
23:42 <Ryan_Lane> powercycling srv154 [production]
23:40 <Ryan_Lane> powercycling srv276, it's dead [production]
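(The powercycles above go through each box's out-of-band management interface; a generic IPMI example with placeholder hostname and credentials, since the actual tooling may have been vendor-specific, e.g. DRAC.)

    ipmitool -I lanplus -H srv276-mgmt.example.org -U admin -P 'xxxx' chassis power cycle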
22:56 <mark> Setup cr1-sdtpa with initial config; connected to csw5-pmtpa (via L2 csw1-sdtpa); OSPF up [production]
22:36 <RobH> wmf_ops: you can disregard the humidity alarms for eqiad that are spamming alerts to email. eq confirms no humidity issue on site and I will investigate the actual sensors this friday [production]
22:21 <Ryan_Lane> repooling srv169, it was missing the wikimedia-lvs-realserver package. fixed in puppet [production]
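(A plausible way to confirm the fix on srv169 once the Puppet change landed; the agent invocation is an assumption, and on 2011-era Puppet it may have been puppetd --test instead.)

    puppet agent --test                  # one-off run to apply the updated manifest
    dpkg -l wikimedia-lvs-realserver     # verify the package is now installed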
22:04 <Ryan_Lane> rebooting srv169 [production]
21:59 <Ryan_Lane> depooling srv169 [production]
21:59 <Ryan_Lane> sync'd srv169, repooling to test [production]
21:54 <Ryan_Lane> depooling srv169 [production]
21:51 <Ryan_Lane> restarted apache on singer [production]
21:46 <laner> synchronized php-1.17/wmf-config/db.php 'depooling srv154, repooling srv178' [production]
21:30 <pdhanda> ran sync-common-all 'Synced to r91606 for ArticleFeedback' [production]
21:27 <Ryan_Lane> setting the proxy config for secure back to its original value; removed ~ backup files from sites-enabled [production]
21:17 <Ryan_Lane> putting retry=3 back, and adding a timeout of 15 seconds to secure [production]
21:16 <Ryan_Lane> removed retry=3 from ProxyPass directive for secure. 3 seconds really isn't enough for this service... [production]
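(The net result of the two secure changes above would be a ProxyPass directive of roughly this shape; the backend URL is a placeholder, and the validate/reload commands are the standard Apache ones rather than anything confirmed from the log.)

    #   ProxyPass / http://backend.example.org/ retry=3 timeout=15
    apache2ctl configtest && apache2ctl graceful   # validate, then reload without dropping connections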
21:06 <RobH> running puppet on spence, this is going to take forever. [production]
21:05 <Ryan_Lane> restarting apache on singer [production]
19:37 <mark> Added DNS entries for cr1-sdtpa and cr2-pmtpa [production]
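(Once the zone change was pushed, the new records could be checked with something like the following, assuming the routers live under wikimedia.org.)

    dig +short cr1-sdtpa.wikimedia.org
    dig +short cr2-pmtpa.wikimedia.org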
19:25 <hashar> hexmode raised a user issue with blocking. It is a lock wait timeout happening from time to time on enwiki. 30 occurrences in dberror.log for Block::purgeExpired. Could not reproduce it, so I am assuming it was a temporary issue. [production]
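(The occurrence count was presumably pulled straight from the log with something like this; the dberror.log path is an assumption.)

    grep -c 'Block::purgeExpired' /home/wikipedia/logs/dberror.log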
19:15 <hashar> srv154 seems unreachable. dberror.log is spammed with "Error connecting to <srv154 IP>" [production]
19:13 <RobH> added webmaster@ to other top level domain mail routing to forward to the wikimedia.org webmaster for google securebrowsing stuff per RT#1122 [production]
18:08 <pdhanda> running maintenance/cleanupTitles.php on commonswiki [production]
17:51 <pdhanda> running maintenance/namespaceDupesWT.php on commonswiki [production]
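(These were presumably run through the multiversion wrapper used in production; a sketch of the invocations, with the exact wrapper syntax assumed.)

    mwscript maintenance/cleanupTitles.php --wiki=commonswiki
    mwscript maintenance/namespaceDupesWT.php --wiki=commonswiki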
17:12 <RobH> srv169 successfully back in service, tests fine and has all updated files, lvs3 updated to include it in pool [production]
17:11 <RobH> returning srv169 into service [production]
15:37 <mark> Removed ms5:/etc/cron.d/mdadm [production]
15:37 <mark> Stopped MD raid resync on ms5 [production]
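(For reference, pausing an in-progress MD resync and dropping the Debian cron job that reschedules the periodic checks looks roughly like this; the md device name is a placeholder.)

    echo idle > /sys/block/md0/md/sync_action   # abort the running resync/check
    rm /etc/cron.d/mdadm                        # stop the monthly checkarray job from kicking it off again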
15:28 <RobH> search18 booted back up successfully [production]
15:25 <RobH> API lag issues are known, caused by the search server failure; being worked on presently [production]
15:24 <RobH> search18's SAS configuration BIOS confirms both disks are still in a non-degraded (according to it) mirror [production]
15:23 <RobH> search18 randomly rebooted after the disk check, before reaching the login prompt [production]
15:19 <RobH> rebooting search18 [production]