2017-02-03 §
11:09 <hashar> beta: removed old kernels from deployment-redis02 to free up disk space [releng]
10:42 <hashar> Image ci-jessie-wikimedia-1486115643 in wmflabs-eqiad is ready T156923 [releng]
10:12 <hashar> Image ci-jessie-wikimedia-1486115643 in wmflabs-eqiad is ready T156923 [releng]
09:54 <hashar> Regenerating Nodepool Jessie snapshot to pick up a new HHVM version T156923 [releng]
2017-02-02 §
21:56 <hashar> integration-slave-jessie-1001: wiping /srv/pbuilder/base-trusty-amd64.cow; it was not properly provisioned, causing builds to fail (e.g. lack of /etc/hosts). Running puppet to reprovision it (poke T156651) [releng]
16:26 <Amir1> deploying 9fd75a1 ores in beta [releng]
16:17 <hashar> integration-slave-jessie-1001: wiping /srv/pbuilder/base-trusty-i386.cow/; it was not properly provisioned, causing builds to fail (e.g. lack of /etc/hosts). Running puppet to reprovision it (poke T156651) [releng]
14:15 <hashar> Nodepool: deleted the in-progress Jessie image build (image id 1322) to prevent a faulty HHVM version from being added. T156923 [releng]
00:52 <tgr> added mhurd as member [releng]
2017-02-01 §
21:43 <bearND> Update mobileapps to e48a88c [releng]
18:51 <thcipriani> nodepool delete-image 1320 per T156923 [releng]
14:53 <gehel> deployment-elastic* fully migrated to Jessie and /srv as data partition - T151326 [releng]
14:52 <gehel> killing test node deployment-elastic08 - T151326 [releng]
14:32 <gehel> shutting down and reimaging deployment-elastic07 - T151326 [releng]
14:06 <gehel> shutting down and reimaging deployment-elastic06 - T151326 [releng]
13:34 <gehel> shutting down and reimaging deployment-elastic05 - T151326 [releng]
13:29 <gehel> starting deployment-elastic* migration to jessie and moving data partition to /srv (T151326 / T151328) [releng]
13:18 <moritzm> upgraded deployment-prep to hhvm 3.12.12 [releng]
2017-01-31 §
22:12 <thcipriani> started mysql on all integration precise instances via salt -- was stopped for some reason [releng]
01:59 <bd808> nodepool is full of instances stuck in "delete" [releng]
01:53 <bd808> https://integration.wikimedia.org/zuul/ showing huge backlogs but https://integration.wikimedia.org/ci/ looks mostly idle [releng]
2017-01-26 §
14:25 <hashar> Created Github repo for Gerrit replication https://github.com/wikimedia/mediawiki-libs-phpstorm-stubs T153252 [releng]
13:49 <hashar> Gerrit creating mediawiki/libs/phpstorm-stubs to fork https://github.com/JetBrains/phpstorm-stubs for T153252 [releng]
2017-01-24 §
11:04 <hashar> Deleting integration-publisher (Precise) replaced by integration-publishing (Jessie). T156064 T143349 [releng]
2017-01-23 §
23:41 <bearND> Update mobileapps to 66ef3c2 [releng]
21:05 <hashar> Created integration-publishing Jessie instance 10.68.23.254 with puppet class role::ci::publisher::labs . Meant to replace Precise instance integration-publisher T156064 [releng]
12:45 <hashar> Image ci-jessie-wikimedia-1485174573 in wmflabs-eqiad is ready | should no longer spawn varnish on boot [releng]
09:02 <hashar> Archiving Gerrit project wikidata/gremlin, marking it read-only T155829 [releng]
07:15 <_joe_> cherry-picking the move of base to profile::base [releng]
2017-01-21 §
21:20 <hashar> integration: updating slave scripts for https://gerrit.wikimedia.org/r/#/c/333389/ [releng]
21:08 <bd808> Puppet failures on deployment-restbase0[12] seem to be some sort of hang of the Puppet process itself. The run prints "Finished catalog run in 2n.nn seconds" but Puppet doesn't terminate for about a minute longer. The only state change logged is the cassandra-metrics-collector service start. [releng]
2017-01-20 §
10:14 <hashar> puppet fails on "integration" labs instances due to an attempt to unmount the non-existent NFS /home. Filed T155820 [releng]
09:18 <hashar> beta: reset workspace of /srv/mediawiki-staging/php-master/extensions/reCaptcha; it had a local .gitignore hack for some reason [releng]
09:05 <hashar> integration: restarted mysql on trusty permanent slaves (T141450 T155815): salt -v '*trusty*' cmd.run 'service mysql start' [releng]
2017-01-19 §
22:11 <Krenair> added a bunch of others to the same group per request. We should figure out how to make this process sane somehow [releng]
22:06 <Krenair> added nuria to deploy-service group on deployment-tin [releng]
16:56 <hashar> rebased puppet master on integration and deployment-prep. Trivial conflict between https://gerrit.wikimedia.org/r/#/c/312523/ and a lint change [releng]
09:36 <hashar> Nuking workspaces of all mwext-testextension-hhvm-composer* jobs. Lame attempt for T155600. salt -v '*slave*' cmd.run 'rm -fR /srv/jenkins-workspace/workspace/mwext-testextension-hhvm-composer*' [releng]
2017-01-18 §
10:49 <hashar> Disconnected/connected Jenkins Gearman client. The beta cluster builds had a deadlock. [releng]
10:39 <hashar> Image ci-jessie-wikimedia-1484735445 in wmflabs-eqiad is ready (add python-conftool to hopefully have puppet rspec pass on https://gerrit.wikimedia.org/r/#/c/332475/ ) [releng]
2017-01-17 §
21:47 <urandom> deployment-prep restarting Cassandra on deployment-restbase02 [releng]
21:46 <urandom> deployment-prep restarting Cassandra on deployment-restbase01 [releng]
19:02 <thcipriani> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/#/c/332534/ [releng]
18:25 <thcipriani> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/#/c/332521/ [releng]
18:07 <urandom> deployment-prep restarting Cassandra on deployment-restbase01 [releng]
17:50 <urandom> re-enabling puppet on deployment-restbase02 [releng]
17:47 <urandom> re-enabling puppet on deployment-restbase01 [releng]
10:32 <hashar> Refreshing all jobs in Jenkins: 'jenkins-jobs --conf jenkins_jobs.ini update config/jjb' [releng]
2017-01-16 §
09:33 <hashar> integration: nuked the Zuul merger path for the SelectTag MW extension (on scandium: /srv/ssd/zuul/git/mediawiki/extensions/SelectTag); it failed to merge https://gerrit.wikimedia.org/r/#/c/331974/ [releng]
2017-01-12 §
00:33 <legoktm> deploying https://gerrit.wikimedia.org/r/331796 and https://gerrit.wikimedia.org/r/331795 [releng]