2016-05-04
10:53 <hashar> beta: clearing out leftover apt conf that points to unreachable web proxy : salt -v '*' cmd.run "find /etc/apt -name '*-proxy' -delete" [releng]
10:48 <hashar> Manually fixing nginx upgrade on deployment-cache-text04 and deployment-cache-upload04; see T134362 for details [releng]
09:27 <hashar> deployment-cache-text04: systemctl stop varnish-frontend.service, to clear out all the stuck CLOSE_WAIT connections T134346 [releng]
08:33 <hashar> fixed puppet on deployment-cache-text04 (race condition generating puppet.conf ) [releng]
2016-05-03
23:21 <bd808> Changed "Maximum Number of Retries" for ssh agent launch in jenkins for deployment-tin from "0" to "10" [releng]
23:01 <twentyafterfour> rebooting deployment-tin [releng]
23:00 <bd808> Jenkins agent on deployment-tin not spawning; investigating [releng]
20:02 <hashar> Restarting Jenkins [releng]
16:49 <hashar> Notice: /Stage[main]/Contint::Packages::Python/Package[pypy]/ensure: ensure changed 'purged' to 'present' | T134235 [releng]
16:46 <hashar> Refreshing Nodepool Jessie image to have it include pypy | T134235 poke @jayvdb [releng]
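For reference, a manual Nodepool snapshot refresh of that era looked roughly like this (a sketch assuming the stock pre-v3 nodepool CLI; the provider and image names below are assumptions, not taken from the log):
    nodepool image-list                                       # inspect the current snapshot images
    nodepool image-update wmflabs-eqiad ci-jessie-wikimedia   # rebuild the Jessie snapshot so newly puppetized packages (pypy) are baked in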
14:49 <mobrovac> rebooting deployment-tin [releng]
14:25 <hashar> beta salt -v '*' pkg.upgrade [releng]
14:19 <hashar> beta: added unattended upgrade to Hiera::deployment-prep [releng]
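To confirm the Hiera change actually landed on an instance after the next puppet run, one can check the stock unattended-upgrades configuration (a sketch; the file names are the Debian/Ubuntu defaults, nothing deployment-prep specific):
    puppet agent --test
    cat /etc/apt/apt.conf.d/20auto-upgrades                  # should set APT::Periodic::Unattended-Upgrade "1";
    grep -i origins /etc/apt/apt.conf.d/50unattended-upgrades   # which origins may auto-upgrade (Origins-Pattern on Debian, Allowed-Origins on Ubuntu)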
13:30 <hashar> Restarted nslcd on deployment-tin , pam was refusing authentication for some reason [releng]
13:29 <hashar> beta: got rid of a leftover Wikidata/Wikibase patch that broke scap salt -v 'deployment-tin*' cmd.run 'sudo -u jenkins-deploy git -C /srv/mediawiki-staging/php-master/extensions/Wikidata/ checkout -- extensions/Wikibase/lib/maintenance/populateSitesTable.php' [releng]
13:23 <hashar> deployment-tin force upgraded HHVM from 3.6 to 3.12 [releng]
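Such a forced upgrade is plain apt against the newer component, roughly (a sketch; which apt component serves HHVM 3.12 to deployment-tin is an assumption):
    apt-get update
    apt-cache policy hhvm          # confirm 3.12 is the candidate version
    apt-get install hhvm
    service hhvm restart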
09:42 <hashar> adding puppet class contint::slave_scripts to deployment-sca01 and deployment-sca02 . Ships multigit.sh T134239 [releng]
09:31 <hashar> Deleting CI slave deployment-cxserver03 , added deployment-sca01 and deployment-sca02 in Jenkins. T134239 [releng]
09:28 <hashar> deployment-sca01 removing puppet lock /var/lib/puppet/state/agent_catalog_run.lock and running puppet again [releng]
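The stale-lock recovery is the usual one (a sketch):
    # only safe once it is confirmed no puppet agent run is actually in progress
    rm /var/lib/puppet/state/agent_catalog_run.lock
    puppet agent --test --verbose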
09:26 <hashar> Applying puppet class role::ci::slave::labs::common on deployment-sca01 and deployment-sca02 (cxserver and parsoid being migrated T134239 ) [releng]
03:33 <kart_> Deleted deployment-cxserver03, replaced by deployment-sca0x [releng]
2016-05-02
21:27 <cscott> updated OCG to version b775e612520f9cd4acaea42226bcf34df07439f7 [releng]
21:26 <hashar> Nodepool is acting just fine: Demand from gearman: ci-trusty-wikimedia: 457 | <AllocationRequest for 455.0 of ci-trusty-wikimedia> [releng]
21:25 <hashar> restarted qa-morebots "2016-05-02 21:22:23,599 ERROR: Died in main event loop" [releng]
21:23 <hashar> gallium: enqueued 488 jobs directly in Gearman. That is to test https://gerrit.wikimedia.org/r/#/c/286462/ ( mediawiki/extensions to hhvm/zend5.5 on Nodepool). Progress /home/hashar/gerrit-286462.log [releng]
21:20 <hashar> gallium: enqueued 488 jobs directly in Gearman. That is to test https://gerrit.wikimedia.org/r/#/c/286462/ ( mediawiki/extensions to hhvm/zend5.5 on Nodepool). Progress /home/hashar/gerrit-286462.log [releng]
21:19 <cscott> updated OCG to version b775e612520f9cd4acaea42226bcf34df07439f7 [releng]
20:14 <hashar> MediaWiki phpunit jobs to run on Nodepool instances \O/ [releng]
16:41 <urandom> Forcing puppet run and restarting Cassandra on deployment-restbase0[1-2] : T126629 [releng]
16:40 <urandom> Cherry-picking https://gerrit.wikimedia.org/r/operations/puppet refs/changes/78/284078/12 to deployment-puppetmaster : T126629 [releng]
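Cherry-picking a Gerrit patchset onto the local puppetmaster checkout follows the standard Gerrit download flow, roughly (a sketch; the /var/lib/git/operations/puppet path is an assumption):
    cd /var/lib/git/operations/puppet
    git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/78/284078/12
    git cherry-pick FETCH_HEAD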
16:24 <urandom> Restarting Cassandra on deployment-restbase0[1-2] : T126629 [releng]
16:21 <urandom> forcing puppet run on deployment-restbase0[1-2] : T126629 [releng]
16:20 <urandom> cherry-picking latest refs/changes/78/284078/11 onto deployment-puppetmaster : T126629 [releng]
09:44 <hashar> On zuul-merger instances (gallium / scandium), cleared out pywikibot/core working copy ( rm -fR /srv/ssd/zuul/git/pywikibot/core/ ) T134062 [releng]
2016-04-30
18:31 <Amir1> deploying d4f63a3 from github.com/wiki-ai/ores-wikimedia-config into targets in beta cluster via scap3 [releng]
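A scap3 deploy from the deployment host looks roughly like this (a sketch; the repository path under /srv/deployment is hypothetical):
    cd /srv/deployment/ores/config           # hypothetical checkout of ores-wikimedia-config
    git fetch origin && git checkout d4f63a3
    scap deploy 'ores-wikimedia-config d4f63a3 to beta'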
2016-04-29
16:37 <jzerebecki> restarting zuul for 4e9d180..ebb191f [releng]
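A Zuul restart for a layout update of that era amounted to pulling the new integration/config onto the Zuul server and bouncing the service (a sketch; both the checkout path and the service name are assumptions):
    cd /etc/zuul/wikimedia && git pull     # brings in 4e9d180..ebb191f
    sudo service zuul-server restart       # or a reload, when only the layout changed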
15:45 <hashar> integration: deleting integration-trusty-1026 and cache-rsync. Maybe that will clear them out of Shinken [releng]
15:14 <hashar> integration: created 'cache-rsync' and 'integration-trusty-1026', attempting to have Shinken deprovision them [releng]
2016-04-28
22:03 <urandom> deployment-restbase01 upgrade to 2.2.6 complete : T126629 [releng]
21:56 <urandom> Stopping Cassandra on deployment-restbase01, upgrading package to 2.2.6, and forcing puppet run : T126629 [releng]
21:55 <urandom> Snapshotting Cassandra tables on deployment-restbase01 (name = 1461880519833) : T126629 [releng]
21:55 <urandom> Snapshotting Cassandra tables on deployment-restbase01 : T126629 [releng]
21:52 <urandom> Forcing puppet run on deployment-restbase02 : T126629 [releng]
21:51 <urandom> Cherry-picking operations/puppet refs/changes/78/284078/10 to deployment-puppetmaster : T126629 [releng]
20:46 <urandom> Starting Cassandra on deployment-restbase02 (now v2.2.6) : T126629 [releng]
20:41 <urandom> Re-enable puppet and force run on deployment-restbase02 : T126629 [releng]
20:38 <urandom> Halting Cassandra on deployment-restbase02, masking systemd unit, and upgrading package(s) to 2.2.6 : T126629 [releng]
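The per-node upgrade sequence being logged here is roughly (a sketch; the version-pinned apt install and the systemd unit name follow standard Cassandra packaging):
    nodetool snapshot -t 1461875833996     # pre-upgrade snapshot of all keyspaces (tag logged below at 20:37)
    systemctl stop cassandra
    systemctl mask cassandra               # keep the unit from being started mid-upgrade
    apt-get install cassandra=2.2.6
    systemctl unmask cassandra
    puppet agent --test                    # puppet re-enables and starts the service (logged above at 20:41/20:46)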
20:37 <urandom> Snapshotting Cassandra tables on deployment-restbase02 (snapshot name = 1461875833996) : T126629 [releng]
20:37 <urandom> Snapshotting Cassandra tables on deployment-restbase02 : T126629 [releng]
20:33 <urandom> Cassandra on deployment-restbase01.deployment-prep started : T126629 [releng]