2012-07-23
16:03 <hashar> hopefully half fixed the udp2log on deployment-dbdump. Needs several changes in the puppet files though, because the udp2log-mw init script seems to conflict with the udp2log one :/ [releng]
13:41 <hashar> rebooting -dbdump to make sure everything works fine :D [releng]
13:40 <hashar> udp2log restored on beta!!! Still in /home/wikipedia/logs/ and logged by deployment-dbdump [releng]
13:11 <hashar> applying role::logging::mediawiki to -dbdump (will bring log2udp) [releng]
09:13 <hashar> updating MediaWiki extensions [releng]
09:11 <hashar> updated mediawiki/core: Updating ef3132f..f8de6a7 [releng]
09:10 <hashar> updating core + extensions to their latest master versions [releng]
09:09 <hashar> updated mediawiki-config: Updating 96ba09e..66ca8b0 [releng]

2012-07-18
14:38 <hashar> New / rebooted instances are no longer accessible: {{bug|38473}} - instances cannot boot / reboot anymore [releng]
13:46 <hashar> deleting upload01 (screwed somehow) [releng]
13:46 <hashar> creating deployment-cache-upload03 to replace upload01 [releng]
13:44 <hashar> deployment-cache-upload01 seems screwed: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id. Failed DHCP acquisition? => rebooting [releng]
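A quick way to tell a dead metadata service apart from a failed DHCP lease is to probe both from inside the instance; this is a generic sketch of such a check, not a record of what was actually run here (eth0 as the primary interface is an assumption):
    # does the instance have an address on its primary interface?
    ip addr show eth0
    # can it reach the EC2-style metadata service the boot process waits on?
    curl -sf http://169.254.169.254/2009-04-04/meta-data/instance-id && echo OK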
13:08 <hashar> deployment-cache-upload01: running apt-get upgrade / dist-upgrade and rebooting [releng]
10:22 <hashar> copying apache dir to /data/project. Run as root@deployment-nfs-memc in a screen session [releng]
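Running a long copy inside a detached screen session keeps it alive if the SSH connection drops; a sketch of the general pattern (the source path /mnt/export/apache and the session name are assumptions, the entry does not record the exact command):
    # start a detached screen session running the copy
    screen -dmS apachecopy rsync -a --progress /mnt/export/apache /data/project/
    # re-attach later to check on progress
    screen -r apachecopy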
09:02 <hashar> adding nfs::apache::labs and nfs::upload::labs to deployment-integration [releng]
08:59 <hashar> Applying {{gerrit|15545}} to deployment-integration [releng]
08:26 <hashar> Created deployment-integration to be used as a puppetmaster::self host [releng]

2012-07-17
21:26 <Platonides> installed python-imaging and wamerican on deployment-dbdump [releng]
21:12 <beta-logmsgbot> petrb: updating ArticleFeedbackv5 extension [releng]
19:51 <hashar> 369 languages rebuilt out of 369 [releng]
19:45 <hashar> rebuilding l10n cache: mwscript rebuildLocalisationCache.php --wiki=aawiki --threads=2 [releng]
19:39 <hashar> beta broken by PAGEID magic word introduced with 0a7cf03 / I11d42ca7 {{gerrit|9858}} [releng]
19:32 <hashar> running git bisect of core 80fbb70..ef3132f [releng]
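For reference, a bisect over that range follows the usual git workflow; a sketch assuming the breakage is checked by hand against beta at each step (the actual test used for the PAGEID regression is not recorded in the log):
    cd mediawiki/core
    git bisect start
    git bisect bad ef3132f    # tip known to show the breakage
    git bisect good 80fbb70   # last known-good revision
    # test beta, then mark each step until git names the first bad commit
    git bisect good           # or: git bisect bad
    git bisect reset          # return to the original branch when done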
19:26 <hashar> upgrading MediaWiki core 80fbb70..ef3132f [releng]
19:20 <hashar> updated AFTv5: f97811f..d3bd97f [releng]
19:04 <hashar> updated robots.txt to specify a user-agent. Will definitely prevent Google from killing beta :) [releng]
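For context, a robots.txt only takes effect when its rules sit under a User-agent line; the simplest file that keeps well-behaved crawlers off the whole site looks like the sketch below. This is an assumed example, not the actual rules deployed on beta:
    # minimal robots.txt sketch (assumed, not the beta file)
    User-agent: *
    Disallow: /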
18:54 <hashar> squid resumed. The swap files got corrupted somehow; needed to delete them entirely to start again. Squid is storing again. [releng]
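Taken together with the 18:36-18:40 entries below, the recovery amounts to wiping the corrupted on-disk cache and letting squid rebuild it; a generic sketch, assuming a stock squid init script and the cache directory named in the log (the exact commands used are not recorded):
    service squid stop
    rm -fR /data/project/squid1/*   # delete the corrupted swap/cache files (as done at 18:40)
    squid -z                        # recreate the swap directories
    service squid start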
18:40 <hashar> -squid bah, doing rm -fR /data/project/squid1/* [releng]
18:39 <hashar> installed `tree` on deployment-squid [releng]
18:38 <hashar> removing swap files in /data/project/squid1 [releng]
18:36 <hashar> Squid is bugged as hell: 2012/07/17 18:36:13| Store rebuilding is -0.1% complete and looping [releng]
18:21 <beta-logmsgbot> hashar: rebooting squid, glusterfs gone wild apparently [releng]
14:45 <hashar> Blacklisted user agents matching /.*Googlebot.*/ [releng]
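One common way to blacklist a user agent at the squid layer is a `browser` ACL; the sketch below is an assumed example, since the entry does not record where or how the blacklist was actually applied:
    # squid.conf sketch: deny any request whose User-Agent matches Googlebot
    # (must appear before the allow rules)
    acl badbots browser -i .*Googlebot.*
    http_access deny badbots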
13:45 <hashar> Manually restarted apaches [releng]
13:44 <hashar> Imported all.conf apache conf from production [releng]
13:29 <hashar> err: /Stage[main]/Mediawiki::Sync/Exec[mw-sync]: Failed to call refresh: Command exceeded timeout at /etc/puppet/manifests/mediawiki.pp:24 [releng]
13:28 <hashar> All apaches are dead :/ [releng]
09:26 <hashar> Adding class role::applicationserver::jobrunner [releng]
09:20 <hashar> sync upload6 dirs again. root@deployment-nfs-memc:$ rsync -a --progress --inplace /mnt/export/upload6 /data/project/upload6 [releng]