2015-03-23 §
23:11 <legoktm> deleting mwext-*-qunit* workspaces on gallium, shouldn't be needed [releng]
23:07 <legoktm> deleting mwext-*-lint workspaces on gallium, shouldn't be needed [releng]
23:00 <legoktm> lanthanum is now online again, with 13G free disk space [releng]
22:58 <legoktm> deleting mwext-*-qunit* workspaces on lanthanum, shouldn't be needed any more [releng]
22:54 <legoktm> deleting mwext-*-qunit-mobile workspaces on lanthanum, shouldn't be needed any more [releng]
22:48 <legoktm> deleting mwext-*-lint workspaces on lanthanum, shouldn't be needed any more [releng]
22:45 <legoktm> took lanthanum offline in jenkins [releng]
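(A rough sketch of the depool-and-clean sequence in the entries above; the Jenkins URL is real, but the workspace path and exact commands are assumptions, not a record of what was run:

    # take the slave offline via the Jenkins CLI, clear the obsolete workspaces,
    # check the freed space, then bring the node back online
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ offline-node lanthanum
    ssh lanthanum.eqiad.wmnet 'rm -rf /srv/ssd/jenkins-slave/workspace/mwext-*-qunit* \
                                      /srv/ssd/jenkins-slave/workspace/mwext-*-lint'
    ssh lanthanum.eqiad.wmnet 'df -h /srv/ssd'
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ online-node lanthanum
)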
20:59 <bd808> Last log copied from #wikimedia-labs [releng]
20:58 <bd808> 20:41 cscott deployment-prep updated OCG to version 11f096b6e45ef183826721f5c6b0f933a387b1bb [releng]
20:41 <cscott> updated OCG to version 11f096b6e45ef183826721f5c6b0f933a387b1bb [releng]
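(The entry above pins OCG to an exact commit. A generic sketch of that pattern; the checkout path and service name are assumptions, not the actual OCG deploy tooling:

    cd /srv/deployment/ocg/ocg     # assumed checkout location
    git fetch origin
    git checkout 11f096b6e45ef183826721f5c6b0f933a387b1bb
    sudo service ocg restart
)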
19:28 <YuviPanda> created staging-rdb01.eqiad.wmflabs [releng]
19:19 <YuviPanda> disabled puppet on staging-palladium to test a puppet patch [releng]
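(A minimal sketch of disabling the agent on a Puppet 3 host; the reason string is illustrative:

    sudo puppet agent --disable 'testing a puppet patch'
    # ...test the patch, then re-enable:
    sudo puppet agent --enable
)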
18:41 <legoktm> deploying https://gerrit.wikimedia.org/r/198762 [releng]
13:11 <hashar> and I restarted qa-morebots a minute or so ago (see https://wikitech.wikimedia.org/wiki/Morebots#Example:_restart_the_ops_channel_morebot ) [releng]
13:11 <hashar> Jenkins: deleting unused jobs mwext-.*-phpcs-HEAD and mwext-.*-lint [releng]
2015-03-21 §
17:53 <legoktm> deployed https://gerrit.wikimedia.org/r/198503 [releng]
00:02 <Krinkle> Reestablished Jenkins-Gearman connection [releng]
2015-03-20 §
23:08 <marxarelli> Reloading Zuul to deploy I693ea49572764c96f5335127902404167ca86487 [releng]
22:50 <marxarelli> Running `jenkins-jobs update` to create job mediawiki-vagrant-bundle17-yard-publish [releng]
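(A sketch of that JJB invocation; the config file and job-definition paths are assumptions:

    jenkins-jobs --conf jenkins_jobs.ini update config/ mediawiki-vagrant-bundle17-yard-publish
)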
19:00 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/198276 [releng]
17:17 <Krinkle> Reloading Zuul to deploy I5edff10a4f0 [releng]
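(Roughly what the "Reloading Zuul" entries here correspond to on the scheduler host; the paths and init script are assumptions. The reload makes zuul-server re-read its layout without dropping running jobs:

    cd /etc/zuul/wikimedia && git pull    # fetch the merged layout change
    sudo /etc/init.d/zuul reload          # signals the running zuul-server to reload
)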
12:32 <mobrovac> deployment-salt ops/puppet: un-cherry-picked I48b1a139b02845c94c85cd231e54da67c62512c9 [releng]
12:30 <mobrovac> deployment-prep disabled puppet on deployment-restbase[1,2] until https://gerrit.wikimedia.org/r/#/c/197662/ is merged [releng]
08:36 <mobrovac> deployment-salt ops/puppet: cherry-picking I48b1a139b02845c94c85cd231e54da67c62512c9 [releng]
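(A hedged sketch of cherry-picking a Gerrit change onto the beta puppetmaster's ops/puppet checkout; the checkout path is an assumption and the refs/changes/NN/NNNNNN/N refspec is a placeholder, since the real one comes from the change's download links in Gerrit:

    cd /var/lib/git/operations/puppet
    git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/NN/NNNNNN/N
    git cherry-pick FETCH_HEAD
    # "un-cherry-picking" later is a revert, or a rebase that drops the commit
)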
04:57 <legoktm> deployed https://gerrit.wikimedia.org/r/198184 [releng]
00:21 <legoktm> deployed https://gerrit.wikimedia.org/r/198161 [releng]
00:14 <legoktm> deployed https://gerrit.wikimedia.org/r/198160 [releng]
2015-03-19 §
23:59 <legoktm> deployed https://gerrit.wikimedia.org/r/198154 [releng]
21:48 <hashar> Jenkins: depooled/repooled lanthanum slave, it was no longer processing any jobs. [releng]
14:09 <hashar> Further updated our JJB fork to upstream commit 4bf020e07, which is version 1.1.0-3 [releng]
13:22 <hashar> refreshed our JJB fork 7ad4386..8928b66. No difference in our jobs. [releng]
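(One way to check that a JJB fork refresh leaves the generated jobs untouched, assuming jenkins-jobs runs from the fork checkout; directory names are illustrative:

    jenkins-jobs test config/ -o before/      # with the fork at 7ad4386
    git checkout 8928b66 && pip install -e .  # move the fork to the new revision
    jenkins-jobs test config/ -o after/
    diff -r before/ after/                    # no output = no difference in our jobs
)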
11:25 <hashar> refreshing configuration of all beta* jenkins jobs [releng]
06:18 <legoktm> deployed https://gerrit.wikimedia.org/r/197860 & https://gerrit.wikimedia.org/r/197858 [releng]
05:20 <legoktm> deleting 'mediawiki-ruby-api-bundle-*' 'mediawiki-selenium-bundle-*' 'mwext-*-bundle-*' jobs [releng]
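(A sketch of bulk-deleting jobs by pattern through the Jenkins CLI; the list-jobs/grep pipeline is an assumption about how the patterns were expanded:

    for job in $(java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ list-jobs \
                 | grep -E 'mediawiki-(ruby-api|selenium)-bundle-|mwext-.*-bundle-'); do
      java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ delete-job "$job"
    done
)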
05:06 <legoktm> deployed https://gerrit.wikimedia.org/r/197853 [releng]
00:57 <Krinkle> Reloading Zuul to deploy Ie1d7bf114b34f9 [releng]
2015-03-18 §
17:52 <legoktm> deployed https://gerrit.wikimedia.org/r/197674 and https://gerrit.wikimedia.org/r/197675 [releng]
17:27 <legoktm> deployed https://gerrit.wikimedia.org/r/197651 [releng]
15:20 <hashar> setting gallium's # of executors from 5 back to 3. When jobs run on it, they slow down the Zuul scheduler and merger! [releng]
15:06 <legoktm> deployed https://gerrit.wikimedia.org/r/194990 [releng]
13:45 <mobrovac> added restbase security group [releng]
13:35 <YuviPanda> made mobrovac projectadmin [releng]
13:34 <YuviPanda> added mobrovac to project [releng]
02:02 <bd808> Updated scap to I58e817b (Improved test for content preceding <?php opening tag) [releng]
01:48 <marxarelli> memory usage, swap, io wait seem to be back to normal on deployment-salt after the kill/start of puppetmaster [releng]
01:45 <marxarelli> kill -9'd puppetmaster processes on deployment-salt after repeated attempts to stop [releng]
01:28 <marxarelli> restarting salt master on deployment-salt [releng]
01:20 <marxarelli> deployment-salt still unresponsive, lots of io wait (94%) + swapping [releng]
00:32 <marxarelli> seeing heavy swapping on deployment-salt; puppet processes using 250M+ memory each [releng]
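(Condensed into chronological order, the diagnosis-and-recovery steps in the entries above look roughly like this; the commands are a sketch, not a transcript:

    vmstat 5                     # confirm heavy swapping and ~94% iowait
    top -o RES                   # spot puppet processes using 250M+ each
    service salt-master restart
    service puppetmaster stop || pkill -9 -f puppetmaster   # SIGKILL after stop hangs
    service puppetmaster start
)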
2015-03-17 §
21:42 <YuviPanda> recreated staging-sca01, let’s wait and see if it just automagically configures itself :) [releng]