2015-07-21 §
21:48 <greg-g> Zuul not responding [releng]
20:23 <hasharConfcall> Zuul no longer reports back to Gerrit due to an error with the Gerrit label [releng]
20:10 <hasharConfcall> Zuul restarted with 2.0.0-327-g3ebedde-wmf2precise1 [releng]
19:48 <hasharConfcall> Upgrading Zuul to zuul_2.0.0-327-g3ebedde-wmf2precise1. Previous version failed because python-daemon was too old; it is now shipped in the venv https://phabricator.wikimedia.org/T106399 [releng]
15:04 <hashar> upgraded Zuul on gallium from zuul_2.0.0-306-g5984adc-wmf1precise1_amd64.deb to zuul_2.0.0-327-g3ebedde-wmf1precise1_amd64.deb. Now uses python-daemon 2.0.5 [releng]
13:37 <hashar> upgraded Zuul on gallium from zuul_2.0.0-304-g685ca22-wmf1precise1 to zuul_2.0.0-306-g5984adc-wmf1precise1. Uses a new version of GitPython [releng]
02:15 <bd808> upgraded to elasticsearch-1.7.0.deb on deployment-logstash2 [releng]
2015-07-20 §
16:55 <thcipriani> restarted puppetmaster on deployment-salt, it was acting wacky [releng]
2015-07-17 §
21:45 <hashar> upgraded nodepool to 0.0.1-104-gddd6003-wmf4. That fixes graceful stop via SIGUSR1 and lets me complete the systemd integration [releng]
20:03 <hashar> stopping Zuul to get rid of a faulty registered function "build:Global-Dev Dashboard Data". The job is gone already. [releng]
2015-07-16 §
16:08 <hashar_> kept nodepool stopped on labnodepool1001.eqiad.wmnet because it spams the cron log [releng]
10:27 <hashar> fixing puppet on deployment-bastion. Stalled since July 7th - https://phabricator.wikimedia.org/T106003 [releng]
10:26 <hashar> deployment-bastion: apt-get upgrade [releng]
02:34 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/224313 for scap testing [releng]
2015-07-15 §
20:53 <bd808> Added JanZerebecki as deployment-prep root [releng]
17:53 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/224829/ [releng]
16:10 <bd808> sudo rm -rf /tmp/scap_l10n_* on deployment-bastion [releng]
15:33 <bd808> root (/) is full on deployment-bastion, trying to figure out why [releng]
14:39 <bd808> mkdir mira.deployment-prep:/home/l10nupdate because puppet's managehome flag doesn't seem to be doing that :( [releng]
05:00 <bd808> created mira.deployment-prep.eqiad.wmflabs to begin testing multi-master scap [releng]
2015-07-14 §
00:45 <bd808> /srv/deployment/scap/scap on deployment-mediawiki02 had corrupt git cache info; moved to scap-corrupt and forced a re-sync [releng]
00:41 <bd808> trebuchet deploy of scap to mediawiki02 failed. investigating [releng]
00:41 <bd808> Updated scap to d7db8de (Don't assume current l10n cache files are .cdb) [releng]
2015-07-13 §
20:44 <thcipriani> there might be some failures; puppetmaster refused to stop as usual, had to kill the pid and restart [releng]
20:39 <thcipriani> restarting puppetmaster on deployment-salt, seeing weird errors on instances [releng]
10:24 <hashar> pushed mediawiki/ruby/api tags for versions 0.4.0 and 0.4.1 [releng]
10:12 <hashar> deployment-prep: killing puppetmaster [releng]
10:06 <hashar> integration: kicking puppet master. It is stalled somehow [releng]
2015-07-11 §
04:35 <bd808> Updated /var/lib/git/labs/private to latest upstream [releng]
04:18 <bd808> Logstash cluster upgrade complete! Kibana working again [releng]
04:17 <bd808> Upgraded Elasticsearch to 1.6.0 on logstash1006; replicas recovering now [releng]
03:54 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/224219/ [releng]
03:54 <bd808> fixed rebase conflict with "Enable firejail containment for zotero" by removing stale cherry-pick [releng]
2015-07-10 §
16:12 <hashar> nodepool puppetization going on :-D [releng]
03:01 <legoktm> deploying https://gerrit.wikimedia.org/r/223992 [releng]
2015-07-09 §
22:16 <hashar> integration: pulled labs/private.git : dbef45d..d41010d [releng]
2015-07-08 §
23:17 <bd808> Kibana functional again. Imported some dashboards from prod instance. [releng]
22:48 <marxarelli> cherry-picked https://gerrit.wikimedia.org/r/#/c/223691/ on integration-puppetmaster [releng]
22:33 <bd808> about half of the indices on deployment-logstash2 were lost. I assume it was caused by shard rebalancing to logstash1 that I didn't notice before I shut it down and deleted it :( [releng]
22:32 <bd808> Upgraded elasticsearch on logstash2 to 1.6.0 [releng]
22:00 <bd808> Kibana messed up. Half of the logstash elasticsearch indices are gone from deployment-logstash2 [releng]
21:05 <legoktm> deployed https://gerrit.wikimedia.org/r/223669 [releng]
11:47 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/223530 [releng]
09:26 <hashar> upgraded plugins on jenkins and restarting it [releng]
2015-07-07 §
23:58 <bd808> updated scap to 303e72e (Increment deployment stats after sync-wikiversions) [releng]
21:23 <bd808> deleted instance deployment-logstash1 [releng]
20:48 <marxarelli> cherry-picking https://gerrit.wikimedia.org/r/#/c/158016/ on deployment-salt [releng]
20:07 <bd808> Forced puppet run on deployment-restbase01; run picked up changes that should have been applied yesterday, not sure why puppet wasn't running from cron properly [releng]
19:58 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/223391/ [releng]
18:51 <bd808> restarted puppetmaster on deployment-salt to pick up logging config changes [releng]