2015-09-02 §
16:34 <hashar_> nodepool .deb package bumped to 0.1.1. Pending upload to apt.wikimedia.org [releng]
16:34 <hashar_> nodepool database backend has been setup by ops (thank you Jaime) [releng]
16:33 <hashar_> ping [releng]
15:09 <hashar> bumping operations/debs/nodepool upstream branch from 0.1.0 to 0.1.1 ( 462cbe9..3c635ec ) [releng]
2015-09-01 §
21:08 <hashar> marxarelli properly built a CI image using diskimage-builder \O/ [releng]
20:01 <dapatrick> Starting scans/spidering on integration-mediawiki03 [releng]
01:12 <James_F> Re-restarting grrrit-wm rolled back to 2f5de55ff75c3c268decfda7442dcdd62df0a42d [releng]
00:54 <James_F> Restarted grrrit-wm with I7eb67e3482 as well as I48ed549dc2b. [releng]
00:32 <James_F> Didn't work, rolled back grrrit-wm to 2f5de55ff75c3c268decfda7442dcdd62df0a42d. [releng]
00:32 <James_F> Didn't work, r [releng]
00:29 <James_F> Restarted grrrit-wm for I48ed549dc2b. [releng]
2015-08-31 §
15:13 <jzerebecki> did https://phabricator.wikimedia.org/T109007#1537572 [releng]
2015-08-30 §
20:53 <hashar> beta-scap-eqiad failing due to the mwdeploy user not being able to ssh to other hosts. Added the ssh key again following https://phabricator.wikimedia.org/T109007#1537572, which fixed it [releng]
2015-08-29 §
01:01 <bd808> Deleted local mwdeploy user on deployment-tmh01 that was causing scap failures [releng]
00:21 <bd808> stopping and starting jobrunner and jobchron on deployment-tmh01 [releng]
2015-08-28 §
23:40 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/234699/ [releng]
20:17 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/234599 to setup new tmh01 as scap target [releng]
20:15 <bd808> restored 3 cherry picks that were lost when rebuilding the ops/puppet git repo [releng]
20:07 <bd808> deployment-puppetmaster has only one cherry-pick; looks like maybe dcausse dropped the prior stack when working on Icc95ac8 [releng]
18:17 <bd808> Cleaned up some puppet groups for deployment-prep that no longer exist in ops/puppet [releng]
18:03 <bd808> Building deployment-tmh01.deployment-prep.eqiad.wmflabs to replace deployment-videoscaler01 [releng]
18:01 <bd808> Nope, I deleted deployment-videoscaler01 [releng]
18:01 <bd808> Deleted deployment-urldownloader.deployment-prep.eqiad.wmflabs [releng]
16:53 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/234569 [releng]
11:39 <hashar> gallium: rm -fR /srv/org/wikimedia/integration/cover/mediawiki-core/master/php2 . This way https://integration.wikimedia.org/cover/mediawiki-core/ redirects to the coverage report (thanks Krinkle) [releng]
11:37 <hashar> deleting https://integration.wikimedia.org/ci/job/mediawiki-core-code-coverage-2 (same) [releng]
10:43 <hashar> pooling back integration-slave-trusty-1016. It was once depooled for debugging purposes and then repooled ( https://phabricator.wikimedia.org/T110054 ), but apparently the Jenkins restart did not pool it back again :/ [releng]
00:56 <thcipriani> sudo keyholder arm on deployment-bastion fixed beta-scap-eqiad [releng]
2015-08-27 §
20:37 <marxarelli> Reloading Zuul to deploy If273fceb4134e5f3e38db8361f1a355f9fcfee3a [releng]
12:52 <hashar> cleaning up old workspaces from jobs that are now throttled to one per node (ex: <tt>sudo salt '*slave*' cmd.run 'rm -fR /mnt/jenkins-workspace/workspace/mediawiki*@?'</tt> ) [releng]
08:17 <moritzm> enabled base::firewall on deployment-mediawiki0[1-3] [releng]
02:54 <matt_flaschen> Manual UPDATE for enwiki DB on Beta Cluster to work around earlier ref_src_wiki update.php problem. [releng]
02:42 <matt_flaschen> Manually fixed index on flow_ext_ref for cawiki, en_rtlwiki, enwiki, hewiki, metawiki, and testwiki on Beta Cluster due to https://gerrit.wikimedia.org/r/#/c/234162/ [releng]
00:14 <marxarelli> Reloading Zuul to deploy Iaab45d659df4b817a0dd27a7ccde17d71f630aaa [releng]
2015-08-26 §
23:39 <bd808> Updated scap to a7ec319 (Use configured bin_dir to find refreshCdbJsonFiles) [releng]
23:32 <Krenair> Re-armed keyholder on deployment-bastion [releng]
21:51 <matt_flaschen> To fix https://gerrit.wikimedia.org/r/#/c/233952/1 on Beta, manually ran: while read line; do echo "Starting $line\n"; echo 'ALTER TABLE flow_wiki_ref DROP COLUMN ref_src_wiki;' | sql --write "$line"; echo "Finished $line\n"; done < /srv/mediawiki/all-labs.dblist [releng]
16:39 <bd808> marked https://integration.wikimedia.org/ci/computer/integration-slave-precise-1014/ offline for git clone problems [releng]
16:18 <marxarelli> deleted udp2log.log and restarted service. so far nothing out of `tail -fn0 udp2log.log` [releng]
16:16 <marxarelli> stopping udp2log on deployment-flourine [releng]
16:14 <marxarelli> udp2log is mostly "egrep: writing output: Broken pipe" [releng]
16:10 <marxarelli> disk space at 97% on deployment-flourine, mainly due to 15G /var/log/udp2log/udp2log.log [releng]
16:01 <bd808> sudo rm -rf integration-slave-precise-1014:/mnt/jenkins-workspace/workspace/mediawiki-core-phplint/.git [releng]
09:57 <hashar> Bumping our JJB mirror a3aef64..f01628c Required for the Android Emulator plugin support ( https://phabricator.wikimedia.org/T110307 ) [releng]
07:39 <hashar_> puppet is back in action on beta cluster [releng]
07:38 <hashar_> enabling puppet agent on deployment-puppetmaster. It was disabled with no reason given [releng]
07:24 <hashar_> reset beta cluster puppet master to origin/production. We have lost any cherry-picks that might have existed [releng]
07:16 <hashar_> started puppetmaster on deployment-puppetmaster [releng]
07:11 <hashar> puppet fails on most beta cluster instances :-( [releng]
2015-08-25 §
23:48 <thcipriani> stopping puppetmaster and disabling puppet runs on deployment-puppetmaster until we get a chance to diagnose/rebuild (tomorrow) [releng]