2014-08-18 §
17:47 <godog> upgraded hhvm on mediawiki02 to 3.3-dev+20140728+wmf5 [releng]
17:44 <bd808> Trying to restart minions again with `salt '*' -b 1 service.restart salt-minion` [releng]
17:39 <bd808> Restarting minions via `salt '*' service.restart salt-minion` [releng]
17:38 <bd808> Restarted salt-master service on deployment-salt [releng]
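For context, a minimal sketch of the salt restart sequence recorded in the three entries above (hostnames and the batch size come from the log; the final `test.ping` check is an assumption, not a logged command):

```bash
# On deployment-salt: restart the salt master first.
sudo service salt-master restart

# Then restart the minions one at a time (batch size 1), as in the 17:44 entry,
# so they do not all drop off the master at once.
sudo salt '*' -b 1 service.restart salt-minion

# Assumption: verify the minions came back afterwards.
sudo salt '*' test.ping
```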
17:19 <bd808> 16:37 Restarted Apache and HHVM on deployment-mediawiki02 to pick up removal of /etc/php5/conf.d/mail.ini (logged in prod SAL by mistake) [releng]
16:59 <manybubbles|lunc> upgrading Elasticsearch in beta to 1.3.2 [releng]
16:11 <bd808> Manually applied https://gerrit.wikimedia.org/r/#/c/141287/12/templates/mail/exim4.minimal.erb on deployment-mediawiki02 and restarted exim4 service [releng]
15:28 <bd808> Puppet failing for deployment-mathoid due to duplicate definition error in trebuchet config [releng]
15:15 <bd808> Reinstated puppet patch to depool deployment-mediawiki01 and forced puppet run on all deployment-cache-* hosts [releng]
15:04 <bd808> Puppet run failing on deployment-mediawiki01 (apache won't start); Puppet disabled on deployment-mediawiki02 ('reason not specified'). Probably needs to wait until Giuseppe is back from vacation to be fixed. [releng]
15:00 <bd808> Rebooting deployment-eventlogging02 via wikitech; console filling with OOM killer messages and puppet runs failing with "Cannot allocate memory - fork(2)" [releng]
14:29 <bd808> Forced puppet run on deployment-cache-upload02 [releng]
14:27 <bd808> Forced puppet run on deployment-cache-text02 [releng]
14:24 <bd808> Forced puppet run on deployment-cache-mobile03 [releng]
14:20 <bd808> Forced puppet run on deployment-cache-bits01 [releng]
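A sketch of what a "forced puppet run" on one of those cache hosts typically looks like (the exact flags used are not recorded in the log; this assumes standard Puppet agent usage):

```bash
# Run the puppet agent once in the foreground with verbose output,
# instead of waiting for the next scheduled run.
sudo puppet agent --test
# --test implies --onetime, --verbose, --no-daemonize and related options.
```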
2014-08-17 §
22:58 <bd808> Attempting to reboot deployment-cache-bits01.eqiad.wmflabs via wikitech [releng]
22:56 <bd808> deployment-cache-bits01.eqiad.wmflabs not allowing ssh access and wikitech console full of OOM killer messages [releng]
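A minimal sketch of how OOM trouble on an instance like deployment-cache-bits01 can be confirmed once it is reachable again (these commands are an assumption; the log only records the console output and the reboot):

```bash
# Look for OOM-killer activity in the kernel log.
dmesg | grep -i 'out of memory\|oom'

# See which processes are using the most memory right now.
ps aux --sort=-%mem | head -n 10
```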
2014-08-15 §
21:57 <legoktm> set $wgVERPsecret in PrivateSettings.php [releng]
21:42 <hashSpeleology> Beta cluster database updates are broken due to CentralNotice. The fix is {{gerrit|154231}} [releng]
20:57 <hashSpeleology> deployment-rsync01: deleting the contents of /usr/local/apache/common-local, then ln -s /srv/common-local /usr/local/apache/common-local as set by beta::common, which is not applied on that host for some reason. {{bug|69590}} [releng]
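Roughly, the fix described in that entry amounts to the following (a sketch reconstructed from the entry itself; beta::common is the puppet class that would normally manage this symlink):

```bash
# On deployment-rsync01: remove the stale copy of the MediaWiki tree
# and put a symlink to the real location in /srv in its place.
sudo rm -rf /usr/local/apache/common-local
sudo ln -s /srv/common-local /usr/local/apache/common-local
```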
20:55 <hashSpeleology> puppet administratively disabled on mediawiki02. Assuming some work is in progress on that host; leaving it untouched [releng]
20:54 <hashSpeleology> puppet is proceeding on mediawiki01 [releng]
20:52 <hashSpeleology> attempting to unbreak mediawiki code update {{bug|69590}} by cherry picking {{gerrit|154329}} [releng]
20:39 <hashSpeleology> In case it is not in the SAL: MediaWiki is no longer synced to the app servers {{bug|69590}} [releng]
20:20 <hashSpeleology> rebooting mediawiki01; /var refuses to clear out and sticks at 100% usage [releng]
20:16 <hashSpeleology> cleaning up /var/log on deployment-mediawiki02 [releng]
20:14 <hashSpeleology> on deployment-mediawiki01 deleting /var/log/apache2/access.log.1 [releng]
20:13 <hashSpeleology> on deployment-mediawiki01 deleting /var/log/apache2/debug.log.1 [releng]
20:13 <hashSpeleology> bunch of instances have a full /var/log :-/ [releng]
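A sketch of how the offending log files on such instances can be located before deleting them (the du/find commands are an assumption; the log only records which files were removed):

```bash
# Show which directories under /var are using the space.
sudo du -xh --max-depth=2 /var | sort -rh | head -n 15

# List the largest individual files under /var/log.
sudo find /var/log -type f -size +100M -exec ls -lh {} \;
```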
11:37 <ori> deployment-cache-bits01 unresponsive; console shows OOMs: https://dpaste.de/LDRi/raw . rebooting [releng]
03:20 <jeremyb> 02:46:37 UTC <ebernhardson> !log beta /dev/vda1 full. moved /srv-old to /mnt/srv-old and freed up 2.1G [releng]
2014-08-14 §
12:23 <hashar> manually rebased operations/puppet.git on puppetmaster [releng]
2014-08-13 §
08:02 <hashar> beta-code-update-eqiad is running again [releng]
07:57 <hashar> fixing ownerships under /srv/scap-stage-dir/php-master/skins; some files belong to root [releng]
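A minimal sketch of that ownership fix, assuming the scap staging tree should be owned by the deployment user rather than root (the actual user and group are not recorded in the entry and are an assumption here):

```bash
# Assumed user/group: give the staging tree back to the deployment user
# so the beta-code-update job can write to it again.
sudo chown -R mwdeploy:mwdeploy /srv/scap-stage-dir/php-master/skins
```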
07:55 <hashar> https://integration.wikimedia.org/ci/job/beta-code-update-eqiad/ is broken :-/ [releng]
2014-08-08 §
16:05 <bd808> Fixed merge conflict that was preventing updates on puppet master [releng]
2014-08-06 §
13:13 <hashar> https://integration.wikimedia.org/ci/job/beta-code-update-eqiad/ is running again [releng]
13:13 <hashar> removed a bunch of local hacks on deployment-bastion:/srv/scap-stage-dir/php-master. They made the git repo dirty and prevented scap from running git pull there [releng]
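A sketch of how such local hacks in the staging checkout can be inspected and discarded so git pull works again (assuming nothing in the working tree needs to be kept):

```bash
cd /srv/scap-stage-dir/php-master

# Show what makes the repository dirty.
git status

# Discard local modifications and untracked files (destructive).
git checkout -- .
git clean -fd
```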
12:08 <hashar> Manually pruning whole text cache on deployment-cache-text02 [releng]
12:07 <hashar> Apache virtual hosts were not properly loaded on mediawiki02. I have hacked /etc/apache2/apache2.conf to make it Include /usr/local/apache/conf/all.conf (instead of main.conf, which does not include everything) [releng]
08:43 <hashar> pruning cache on deployment-cache-text02 / restarting varnish [releng]
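One way to prune the whole cache on a Varnish host like deployment-cache-text02 (a sketch assuming Varnish 3.x; the exact commands used are not in the log):

```bash
# Invalidate every cached object via the management interface
# ("." matches any URL in Varnish 3's ban.url command)...
sudo varnishadm "ban.url ."

# ...or, more bluntly, restart varnishd, which discards the in-memory cache.
sudo service varnish restart
```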
2014-08-02 §
08:53 <swtaarrs> rebuilt and restarted hhvm on deployment-mediawiki02 with potential fix [releng]
05:17 <swtaarrs> restarted hhvm on deployment-mediawiki0{1,2} to unwedge them [releng]
2014-08-01 §
15:03 <bd808> Updated cherry-pick of Iceb8f43 [releng]
15:02 <bd808> Cleaned up puppet repo on deployment-salt; merge conflicts with local Ia463120 hack; reapplied depool of deployment-mediawiki01 [releng]
14:50 <bd808> Restarted stuck hhvm on deployment-mediawiki02; apache had 89 children waiting for a response [releng]
13:27 <godog> changed bt-hhvm in place on deployment-mediawiki01/02 to also copy the binary [releng]
05:32 <ori> depooled deployment-mediawiki02 to investigate HHVM lock-up by cherry-picking I7df8c5310 on beta. [releng]
00:40 <ori> disabled puppet on deployment-mediawiki{01,02} and enabled verbose apache logging [releng]
2014-07-31 §
22:41 <bd808> Restarted hhvm on -mediawiki{01,02}. Brett looked at 01 before I did and said "it's the same as before" [releng]