2013-04-30
20:08 <hashar> migrating apache32 to new NFS server [releng]
20:06 <hashar> root@deployment-bastion:~# /etc/init.d/udp2log stop && /etc/init.d/udp2log-mw start [releng]
20:01 <hashar> applying role::labsnfs::client on -bastion [releng]
19:45 <hashar> applying the very recent `role::labsnfs::client` class on deployment-integration [releng]
19:43 <hashar> Upgraded puppet manifests on deployment-integration and running puppet. [releng]
19:21 <hashar> Migrating homes to the new NFS server [releng]
18:27 <hashar> rsyncs to the NFS server are completed. There are most probably still some tiny files that need to be copied, though. [releng]
16:46 <hashar> Mounted new NFS server on /srv/project on instances: apache32, apache33, video05 and jobrunner08 [releng]
16:01 <hashar> Clearing out years-old backups from /data/project, such as copies of extensions, database dumps and some old instance backups. [releng]
15:28 <hashar> Copying l10n cache to the new NFS server: rsync -av /home/wikipedia/common/php-master/cache /srv/project/apache/common/php-master [releng]
15:11 <hashar> syncing upload data from the Gluster share to the labnfs server: rsync -avv /data/project/upload7 /srv/project [releng]
13:59 <hashar> bastion: created NFS mount point thanks to Coren. echo 1 >/sys/module/nfs/parameters/nfs4_disable_idmapping ; mount -t nfs -o nfsvers=4,port=0,hard,rsize=65535,wsize=65536 labnfs.pmtpa.wmnet:/deployment-prep/project /srv/project [releng]
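For reference, a hypothetical /etc/fstab line equivalent to the manual mount above (a sketch only; it assumes the command's rsize=65535 was meant to be 65536, matching wsize):

```
labnfs.pmtpa.wmnet:/deployment-prep/project  /srv/project  nfs  nfsvers=4,port=0,hard,rsize=65536,wsize=65536  0  0
```

The nfs4_disable_idmapping tweak would still have to be applied separately (e.g. as the `nfs.nfs4_disable_idmapping=1` module option), since fstab cannot express it.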
12:41 <hashar> Refreshed most extensions and running mw-update-l10n [releng]
2013-04-19
19:50 <hashar> The l10n cache had been stalled since at least Mar 22 13:08. The files were owned by `mwdeploy`; seems something changed and they must now be owned by `l10nupdate`, so I ran: chown l10nupdate -R /home/wikipedia/common/php-master/cache/l10n/ [releng]
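A hypothetical follow-up check for a recursive chown like the one above: list anything under the cache directory still owned by a different user (path and user are taken from the log entry; empty output means the chown covered everything).

```shell
# Sketch: after `chown l10nupdate -R`, any path still owned by another
# user would be printed here; no output means the chown was complete.
CACHE=/home/wikipedia/common/php-master/cache/l10n/
find "$CACHE" ! -user l10nupdate -print 2>/dev/null || true
```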
19:46 <hashar> Attempted to update the l10n cache (sudo -u mwdeploy mw-update-l10n) but got a permission denied on /home/wikipedia/common/php-master/cache/l10n [releng]
19:43 <hashar> Gluster is broken on beta. Extensions are no longer updating, nor can the l10n update run. {{bug|47425}} [releng]
19:38 <hashar> root@deployment-bastion:~# /etc/init.d/udp2log stop && /etc/init.d/udp2log-mw start [releng]
19:37 <hashar> Rebooting bastion. Seems GlusterFS cannot allocate memory ({{bug|47425}}) [releng]
19:18 <hashar> manually updating MediaWiki extensions [releng]
11:52 <hashar> Successfully added Mark Bergsma to deployment-prep. [releng]
09:00 <hashar> Updating puppet repositories on search01 and searchidx01. Running puppet on both of them. [releng]
2013-04-10
18:24 <^demon|sick> ran mergeMessageList.php for php-master wikis [releng]
13:33 <hashar> Restarted the database update job https://integration.wikimedia.org/ci/job/beta-update-databases/374/ [releng]
13:32 <hashar> switching udp2log on bastion: /etc/init.d/udp2log stop && /etc/init.d/udp2log-mw start (see {{bug|38995}}) [releng]
13:31 <hashar> rebooting deployment-bastion too: gluster issue [releng]
13:26 <hashar> Cluster is back up :-] [releng]
13:25 <hashar> rebooting both apaches. [releng]
13:24 <hashar> Gluster failure again: /data/project/apache/conf/ has some files missing: www.wikipedia.conf, en2.conf, wikimedia.conf [releng]
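A quick hypothetical sanity check for this failure mode, using the three filenames from the entry above; on a healthy apache32/apache33 it prints nothing, so any output means Gluster has dropped files again:

```shell
# Sketch: report which of the Apache include files are missing.
for f in www.wikipedia.conf en2.conf wikimedia.conf; do
    [ -e "/etc/apache2/wmf/$f" ] || echo "missing: $f"
done
```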
13:23 <hashar> apache2: Syntax error on line 324 of /etc/apache2/apache2.conf: Syntax error on line 9 of /etc/apache2/wmf/all.conf: Could not open configuration file /etc/apache2/wmf/www.wikipedia.conf: No such file or directory [releng]
13:20 <hashar> apt-get upgraded apache32 and apache33. Note that apache is down on them. [releng]
13:19 <hashar> no pages being served. Most probably a PHP fatal error [releng]
13:13 <hashar> reran Jenkins job https://integration.wikimedia.org/ci/job/beta-mediawiki-config-update/ . Some git failures happened in /home/wikipedia/common. [releng]
06:45 <hashar> searchidx01: restarted lucene-search-2, which might have been killed by the OOM killer (see {{bug|46459}}) [releng]
06:39 <hashar> search01: restarted lucene-search-2; it was not listening on port 8123. [releng]
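The kind of minimal liveness probe that would have flagged this (a sketch; it relies on bash's /dev/tcp redirection so it needs no extra tools, and 8123 is the port from the entry above):

```shell
# Sketch: succeed iff something accepts a TCP connection on localhost:$1.
port_open() {
    (exec 3<>"/dev/tcp/localhost/$1") 2>/dev/null
}

if port_open 8123; then
    echo "lucene-search-2: listening on 8123"
else
    echo "lucene-search-2: NOT listening on 8123"
fi
```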