2016-09-15
12:33 <elukey> added base::firewall, beta::deployaccess, mediawiki::conftool, role::mediawiki::appserver to mediawiki05 [releng]
12:20 <elukey> terminate mediawiki02 to create mediawiki05 [releng]
10:48 <hashar> beta: cherry picking moritzm patch https://gerrit.wikimedia.org/r/#/c/310793/ "Also handle systemd in keyholder script" T144578 [releng]
09:33 <hashar> T144006 sudo -u jenkins-deploy -H SSH_AUTH_SOCK=/run/keyholder/proxy.sock ssh mwdeploy@deployment-mediawiki06.deployment-prep.eqiad.wmflabs [releng]
09:10 <elukey> executed git pull and then git rebase -i on deployment puppet master [releng]
08:52 <elukey> terminated mediawiki03 and created mediawiki06 [releng]
08:45 <elukey> removed mediawiki03 from puppet with https://gerrit.wikimedia.org/r/#/c/310749/ [releng]
02:36 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/310701 [releng]
2016-09-14
21:37 <hashar> integration: setting "ulimit -c 2097152" on all slaves due to Zend PHP segfaulting T142158 [releng]
15:21 <godog> cherry-pick https://gerrit.wikimedia.org/r/#/c/310557/ on puppet master [releng]
14:31 <hashar> Added otto to integration labs project [releng]
13:28 <gehel> upgrading deployment-logstash2 to elasticsearch 2.3.5 - T145404 [releng]
09:27 <hashar> Deleting deployment-mediawiki01 , replaced by deployment-mediawiki04 T144006 [releng]
07:19 <legoktm> sudo salt '*trusty*' cmd.run 'service mysql start', it was down on all trusty slaves [releng]
07:17 <legoktm> mysql just died on a bunch of slaves (trusty-1013, 1012, 1001) [releng]
2016-09-13
20:47 <Krenair> Created SRV record _etcd._tcp.beta.wmflabs.org for etcd/confd [releng]
17:02 <marxarelli> re-enabling beta cluster jenkins jobs following maintenance window [releng]
16:59 <marxarelli> aborting beta cluster db migration due to time constraints and ops outage. will reschedule [releng]
15:34 <marxarelli> disabled beta jenkins builds while in maintenance mode [releng]
2016-09-12
14:41 <elukey> applied base::firewall, beta::deployaccess, mediawiki::conftool, role::mediawiki::appserver to deployment-mediawiki04.deployment-prep.eqiad.wmflabs (Debian jessie instance) - T144006 [releng]
12:50 <gehel> rolling back upgrading elasticsearch to 2.4.0 on deployment-elastic05 - T145058 [releng]
12:03 <gehel> upgrading elasticsearch to 2.4.0 on deployment-elastic0? - T145058 [releng]
12:01 <hashar> Gerrit: made the analytics-wmde group owned by itself [releng]
11:57 <hashar> Gerrit: added ldap/wmde as an included group of the 'wikidata' group. Asked by and demoed to addshore [releng]
2016-09-11
20:35 <Krenair> started cron service on deployment-salt02 again, seems it got killed Tue 2016-08-30 13:42:39 UTC - hopefully this will fix the puppet staleness alert [releng]
18:45 <legoktm> deploying https://gerrit.wikimedia.org/r/309829 [releng]
2016-09-09
20:53 <thcipriani> testing scap 3.2.5-1 on beta cluster [releng]
11:08 <hashar> Added git tag for latest versions of mediawiki/selenium and mediawiki/ruby/api [releng]
09:30 <legoktm> Image ci-jessie-wikimedia-1473412532 in wmflabs-eqiad is ready [releng]
08:53 <legoktm> added phpflavor-php70 label to integration-slave-jessie-100[1-5] [releng]
08:49 <legoktm> deploying https://gerrit.wikimedia.org/r/309048 [releng]
2016-09-08
21:33 <hashar> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/309413 " Inject PHP_BIN=php5 for php53 jobs" [releng]
20:00 <hashar> nova delete ci-jessie-wikimedia-369422 (was stuck in deleting state) [releng]
19:49 <hashar> Nodepool, deleting instances that Nodepool lost track of (from nodepool alien-list) [releng]
19:47 <hashar> nodepool can't delete: ci-jessie-wikimedia-369422 [ delete | 2.24 hours ]. Stuck in task_state=deleting :( [releng]
19:46 <hashar> Nodepool looping over some tasks since 17:45 ( https://grafana.wikimedia.org/dashboard/db/nodepool?panelId=21&fullscreen ) [releng]
19:26 <legoktm> repooled integration-slave-jessie-1005 now that php7 testing is done [releng]
19:19 <hashar> integration: salt -v '*' cmd.run 'cd /srv/deployment/integration/slave-scripts; git pull' | https://gerrit.wikimedia.org/r/308931 [releng]
19:12 <hashar> integration: salt -v '*' cmd.run 'cd /srv/deployment/integration/slave-scripts; git pull' | https://gerrit.wikimedia.org/r/309272 [releng]
17:08 <legoktm> deleted integration-jessie-lego-test01 [releng]
16:50 <legoktm> deleted integration-aptly01 [releng]
10:03 <hashar> Delete Jenkins job https://integration.wikimedia.org/ci/job/mwext-VisualEditor-sync-gerrit/ that had been left behind. It is no longer needed. T51846 T86659 [releng]
10:02 <hashar> Delete mwext-VisualEditor-sync-gerrit job, already got removed by ostriches in 139d17c8f1c4bcf2bb761e13a6501e4d85684066 . The issue in Gerrit (T51846) has been fixed. Poke T86659 , one less job on slaves. [releng]
02:25 <Krenair> deployed the latest version of mediawiki/services/parsoid/deploy.git to get https://gerrit.wikimedia.org/r/#/c/309001/ see T144884 [releng]
2016-09-07
20:44 <matt_flaschen> Re-enabled beta-code-update-eqiad . [releng]
20:35 <hashar> Updated security group for deployment-prep labs project. Allow ssh port 22 from contint1001.wikimedia.org (matching rules for gallium). T137323 [releng]
20:30 <hashar> Updated security group for contintcloud and integration labs project. Allow ssh port 22 from contint1001.wikimedia.org (matching rules for gallium). T137323 [releng]
20:14 <matt_flaschen> Temporarily disabled https://integration.wikimedia.org/ci/view/Beta/job/beta-scap-eqiad/ to test live revert of aa0f6ea [releng]
16:09 <hashar> Nodepool back in action. Had to manually delete some instances in labs [releng]
15:58 <hashar> Restarting Nodepool . Lost state when labnet got moved T144945 [releng]