2015-04-29 §
14:11 <hashar> rebooting integration-saltmaster stalled. [releng]
13:11 <hashar> Rebooting deployment-parsoid05 via wikitech interface. [releng]
13:02 <hashar> labvirt1005 seems to have hardware issue. Impacts a bunch of beta cluster / integration instances as listed on https://phabricator.wikimedia.org/T97521#1245217 [releng]
12:22 <hashar> deployment-parsoid05 slow down is https://phabricator.wikimedia.org/T97421 . Running apt-get upgrade and rebooting it but its slowness issue might be with the underlying hardware [releng]
12:13 <hashar> killing puppet on deployment-parsoid05; it is eating all CPU for some reason [releng]
02:40 <legoktm> deploying https://gerrit.wikimedia.org/r/207363 and https://gerrit.wikimedia.org/r/207368 [releng]
2015-04-28 §
23:33 <hoo> Ran foreachwiki extensions/Wikidata/extensions/Wikibase/lib/maintenance/populateSitesTable.php --load-from 'http://meta.wikimedia.beta.wmflabs.org/w/api.php' to fix all sites tables [releng]
23:18 <hoo> Ran mysql> INSERT INTO sites (SELECT * FROM wikidatawiki.sites); on enwikinews to populate the sites table [releng]
23:18 <hoo> Ran mysql> INSERT INTO sites (SELECT * FROM wikidatawiki.sites); on testwiki to populate the sites table [releng]
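A sketch of replaying the two fixes above, assuming the usual foreachwiki/mwscript wrappers on deployment-bastion; the wiki names, the populateSitesTable.php invocation and the copy-from-wikidatawiki INSERT are taken from the log entries, while the temp-file plumbing is illustrative only:
$ foreachwiki extensions/Wikidata/extensions/Wikibase/lib/maintenance/populateSitesTable.php \
      --load-from 'http://meta.wikimedia.beta.wmflabs.org/w/api.php'   # rebuild the sites table on every wiki
$ cat > /tmp/copy-sites.sql <<'SQL'
INSERT INTO sites (SELECT * FROM wikidatawiki.sites);
SQL
$ mwscript sql.php --wiki=testwiki /tmp/copy-sites.sql       # copy the rows straight from wikidatawiki
$ mwscript sql.php --wiki=enwikinews /tmp/copy-sites.sql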
20:57 <YuviPanda> KILL KILL KILL DEPLOYMENT-LUCID-SALT WITH FIRE AND BRIMSTONE AND BAD THINGS [releng]
17:48 <James_F> Restarting grrrit-wm for config change. [releng]
16:24 <bd808> Updated scap to ef15380 (Make scap localization cache build $TMPDIR aware) [releng]
15:42 <bd808> Freed 5G on deployment-bastion by deleting abandoned /tmp/scap_l10n_* directories [releng]
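Roughly how space like that can be reclaimed; the /tmp/scap_l10n_* pattern comes from the entry above, while the dry-run step and the "older than a day" guard are assumptions:
$ du -sh /tmp/scap_l10n_*                                               # see what is actually taking the space
$ sudo find /tmp -maxdepth 1 -type d -name 'scap_l10n_*' -mtime +0 -print            # dry run: list dirs older than a day
$ sudo find /tmp -maxdepth 1 -type d -name 'scap_l10n_*' -mtime +0 -exec rm -rf {} +  # then actually delete them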
14:01 <marxarelli> reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/206967/ [releng]
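For reference, deploying a Zuul configuration change like the one above is usually a matter of updating the layout checkout on the Zuul server and reloading the daemon; the path and the init-script reload here are assumptions about the 2015 setup, not taken from the log:
$ cd /etc/zuul/wikimedia && sudo git pull    # assumed location of the integration/config layout checkout
$ sudo service zuul reload                   # SIGHUP: zuul-server re-reads its layout without dropping the queue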
00:17 <greg-g> after the 3rd or so time doing it (while on the Golden Gate Bridge, btw) it worked [releng]
00:11 <greg-g> still nothing... [releng]
00:10 <greg-g> after disconnecting, marking temp offline, bringing back online, and launching slave agent: "Slave successfully connected and online" [releng]
00:07 <greg-g> deployment-bastion is idle, yet we have 3 pending jobs waiting for an executor on it - will disconnect/reconnect it in Jenkins [releng]
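The same offline/online bounce can be scripted against the Jenkins master instead of clicking through the UI; a sketch assuming jenkins-cli.jar access, with JENKINS_URL as a placeholder (the node name is from the log):
$ jcli() { java -jar jenkins-cli.jar -s "$JENKINS_URL" "$@"; }
$ jcli offline-node deployment-bastion -m 'bouncing stuck executors'   # stop scheduling new builds on the node
$ jcli disconnect-node deployment-bastion                              # drop the slave agent connection
$ jcli connect-node deployment-bastion                                 # relaunch the slave agent
$ jcli online-node deployment-bastion                                  # accept builds again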
2015-04-27 §
21:45 <bd808> Manually triggered beta-mediawiki-config-update-eqiad for zuul build df1e789c726ad4aae60d7676e8a4fc8a2f6841fb [releng]
21:20 <bd808> beta-scap-eqiad job green again after adding a /srv/ disk to deployment-jobrunner01 [releng]
21:08 <bd808> Applied role::labs::lvm::srv on deployment-jobrunner01 and forced puppet run [releng]
21:08 <bd808> Deleted deployment-jobrunner01:/srv/* in preparation for applying role::labs::lvm::srv [releng]
21:06 <bd808> deployment-jobrunner01 missing role::labs::lvm::srv [releng]
21:00 <bd808> Root partition full on deployment-jobrunner01 [releng]
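A rough outline of the recovery on deployment-jobrunner01 described above; the role::labs::lvm::srv class is applied through the wikitech instance-configuration page, so only the shell side is sketched:
$ df -h /                      # confirm it is really the root partition that is full
$ sudo rm -rf /srv/*           # clear the old on-root /srv contents (as logged at 21:08)
# apply role::labs::lvm::srv to the instance via wikitech, then:
$ sudo puppet agent --test     # forced run creates the LVM volume and mounts it on /srv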
20:53 <bd808> removed mwdeploy user from deployment-bastion:/etc/passwd [releng]
20:15 <Krinkle> Relaunched Gearman connection [releng]
19:53 <Krinkle> Jenkins unable to re-create Gearman connection. (HTTP 503 error from /configure). Have to force restart Jenkins [releng]
17:32 <Krinkle> Relaunch slave agent on deployment-bastion [releng]
17:31 <Krinkle> Jenkins slave deployment-bastion deadlock waiting for executors [releng]
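One way to do the forced Jenkins restart mentioned at 19:53 without killing running builds, assuming CLI access to the master; the Gearman plugin connection is then re-enabled from the global configuration page once Jenkins is back:
$ java -jar jenkins-cli.jar -s "$JENKINS_URL" safe-restart   # finish running builds, then restart the master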
08:01 <_joe_> installed hhvm 3.6 on deployment-mediawiki02 [releng]
2015-04-26 §
06:09 <thcipriani|afk> rm scap l10n files from /tmp on deployment-bastion; root partition 100% again... [releng]
2015-04-25 §
16:00 <thcipriani|afk> manually ran logrotate on deployment-jobrunner01, root partition at 100% [releng]
15:16 <thcipriani|afk> clear /tmp/scap files on deployment-bastion, root partition at 100% [releng]
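The two space fixes above in command form; forcing logrotate and the exact glob of scap temp files to delete are judgement calls, not a prescribed procedure:
$ sudo logrotate -f /etc/logrotate.conf    # on deployment-jobrunner01: rotate now instead of waiting for cron
$ sudo rm -rf /tmp/scap_l10n_*             # on deployment-bastion: drop leftover scap l10n build directories
$ df -h /                                  # verify the root partition has headroom again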
2015-04-24 §
18:10 <Krinkle> cvn: Promoted Rxy from member to projectadmin [releng]
18:01 <thcipriani> ran sudo chown -R mwdeploy:mwdeploy /srv/mediawiki on deployment-bastion to fix beta-scap-eqiad, hopefully [releng]
17:26 <thcipriani> remove deployment-prep from domain in /etc/puppet/puppet.conf on deployment-stream, puppet now OK [releng]
17:20 <thcipriani> rm stale lock on deployment-rsync01, puppet fine [releng]
17:10 <thcipriani> gzip /var/log/account/pacct.0 on deployment-bastion: ought to revisit logrotate on that instance. [releng]
17:00 <thcipriani> rm stale /var/lib/puppet/state/agent_catalog_run.lock on deployment-kafka02 [releng]
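The lock and accounting-log cleanups above, spelled out; the paths are as logged, and re-running the agent afterwards is just the obvious sanity check:
$ sudo rm /var/lib/puppet/state/agent_catalog_run.lock   # only safe if no puppet agent run is actually in progress
$ sudo puppet agent --test                               # confirm the next run completes cleanly
$ sudo gzip /var/log/account/pacct.0                     # compress the oversized process-accounting log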
14:25 <_joe_> installing hhvm 3.6.1 on mediawiki-deployment01 [releng]
2015-04-23 §
17:19 <andrewbogott> rebooting deployment-parsoidcache02 because it seems troubled [releng]
06:11 <Krinkle> Running git-cache-update inside screen on integration-slave-trusty-1021 at /mnt/git [releng]
06:11 <Krinkle> integration-slave-trusty-1021 stays depooled (see T96629 and T96706) [releng]
04:35 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/206044 and https://gerrit.wikimedia.org/r/206072 [releng]
00:29 <bd808> cherry-picked and applied https://gerrit.wikimedia.org/r/#/c/205969/ (logstash: Convert $::realm switches to hiera) [releng]
00:17 <bd808> beta cluster fatal monitor full of "Bad file descriptor: AH00646: Error writing to /data/project/logs/apache-access.log" [releng]
00:03 <bd808> cleaned up redis leftovers on deployment-logstash1 [releng]
2015-04-22 §
23:57 <bd808> cherry-picked and applied https://gerrit.wikimedia.org/r/#/c/205968 (remove redis from logstash) [releng]
23:33 <bd808> reset deployment-salt:/var/lib/git/operations/puppet HEAD to production; forced update with upstream; re-cherry-picked I46e422825af2cf6f972b64e6d50040220ab08995 [releng]
23:29 <bd808> deployment-salt:/var/lib/git/operations/puppet in detached HEAD state; looks to be for cherry pick of I46e422825af2cf6f972b64e6d50040220ab08995 ? [releng]
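Approximately what the 23:33 recovery above amounts to, run inside deployment-salt:/var/lib/git/operations/puppet; the remote name and the Gerrit change ref are placeholders, since I46e4228… is a Change-Id whose commit has to be looked up in Gerrit first:
$ git status                                  # confirm the detached-HEAD state and note any local changes
$ git fetch origin
$ git checkout production
$ git reset --hard origin/production          # forced update to match upstream
$ git fetch origin refs/changes/NN/NNNNNN/P   # hypothetical change ref for the I46e422825af2... cherry-pick
$ git cherry-pick FETCH_HEAD                  # re-apply the cherry-pick on top of production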