2014-03-24 §
22:57 <hashar> restarting deployment-cache-upload04, apparently stalled [releng]
22:48 <hashar> upgrading varnish on all pmtpa caches. [releng]
22:47 <hashar> apt-get upgrade varnish on deployment-cache-bits03 [releng]
22:45 <marktraceur> attempted restart of varnish on betalabs; seems to have failed, trying again [releng]
22:42 <hashar> made marktraceur a project admin and granted sudo rights [releng]
22:39 <marktraceur> Restarting betalabs varnish to workaround https://bugzilla.wikimedia.org/show_bug.cgi?id=63034 [releng]
17:25 <bd808> Converted deployment-db1.eqiad.wmflabs to use local puppet & salt masters [releng]
17:06 <bd808> Changed rules in sql security group to use CIDR 10.0.0.0/8. [releng]
17:05 <bd808> Changed rules in search security group to use CIDR 10.0.0.0/8. [releng]
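The two rule changes above swap per-source rules for one CIDR covering all labs-internal traffic. As a sketch of why a single 10.0.0.0/8 rule suffices: a /8 mask compares only the first octet, so every internal 10.x.x.x address matches. The sample addresses below are illustrative, not taken from the log:

```shell
# A /8 prefix matches on the first octet only, so one security-group
# rule covers every labs-internal 10.x.x.x address (sample IPs are
# hypothetical stand-ins for real instance addresses).
in_10_slash_8() {
  case "$1" in
    10.*) echo yes ;;
    *)    echo no ;;
  esac
}
in_10_slash_8 "10.68.16.12"    # internal instance-style address
in_10_slash_8 "208.80.154.10"  # public address
```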
17:05 <bd808> Built deployment-elastic04.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [releng]
16:19 <bd808> Built deployment-elastic03.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [releng]
16:08 <bd808> Built deployment-elastic02.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [releng]
15:54 <bd808> Built deployment-elastic01.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server [releng]
10:31 <hashar> migrated deployment-solr to self puppet/salt masters [releng]
2014-03-21 §
09:29 <hashar> l10ncache is now rebuilt properly: https://integration.wikimedia.org/ci/job/beta-code-update/53508/console [releng]
09:23 <hashar> fixing l10ncache on deployment-bastion: <tt>chown -R l10nupdate:l10nupdate /data/project/apache/common-local/php-master/cache/l10n</tt> The l10nupdate UID/GID have been changed and are now in LDAP [releng]
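The issue behind the fix above generalizes: Unix files store numeric UIDs/GIDs, not names, so when an account's IDs change (here, l10nupdate moving into LDAP) existing files keep the stale numbers until a recursive chown re-stamps them. A minimal sketch in a scratch directory, chowning to the current user as a harmless stand-in for l10nupdate:

```shell
# Files record numeric IDs, not names; after a UID/GID change the old
# numbers linger on disk until a recursive chown re-stamps them.
DEMO=$(mktemp -d)
touch "$DEMO/l10n-cache-file"
# Stand-in for: chown -R l10nupdate:l10nupdate /data/project/.../cache/l10n
chown -R "$(id -u):$(id -g)" "$DEMO"
stat -c '%u' "$DEMO/l10n-cache-file"   # prints the current numeric UID
```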
2014-03-20 §
23:46 <bd808> Mounted secondary disk as /var/lib/elasticsearch on deployment-logstash1 [releng]
23:46 <bd808> Converted deployment-tin to use local puppet & salt masters [releng]
22:09 <hashar> Migrated videoscaler01 to use self salt/puppet masters. [releng]
21:30 <hashar> manually installing timidity-daemon on jobrunner01.eqiad so puppet can stop it and stop whining [releng]
21:00 <hashar> migrate jobrunner01.eqiad.wmflabs to self puppet/salt masters [releng]
20:55 <hashar> deleting deployment-jobrunner02, let's start with a single instance for now [releng]
20:51 <hashar> Creating deployment-jobrunner01 and 02 in eqiad. [releng]
15:47 <hashar> fixed salt-minion service on deployment-cache-upload01 and deployment-cache-mobile03 by deleting /etc/salt/pki/minion/minion_master.pub [releng]
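The fix above works because a Salt minion caches its master's public key and refuses to talk to a master whose key differs; deleting the cached key lets the minion accept the new master's key on restart. A simulation in a scratch directory (the paths below are stand-ins for the real /etc/salt/pki/minion tree):

```shell
# Simulate the stale cached master key and its removal (scratch paths,
# not the real /etc/salt tree on an instance).
PKI_DIR=$(mktemp -d)
echo "old-master-public-key" > "$PKI_DIR/minion_master.pub"
# The fix: drop the cached key. On a real host this is followed by
# restarting salt-minion so it fetches and caches the new master's key.
rm "$PKI_DIR/minion_master.pub"
ls -A "$PKI_DIR"    # directory is now empty
```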
15:30 <hashar> migrated deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs to use the salt/puppetmaster deployment-salt.eqiad.wmflabs. [releng]
15:30 <hashar> deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs recovered! /dev/vdb does not exist in eqiad, which caused the instances to stall. [releng]
10:48 <hashar> Stopped the simplewiki script. Would need to recreate the db from scratch instead [releng]
10:37 <hashar> Cleaning up simplewiki by deleting most pages in the main namespace; should free up some disk space. deleteBatch.php is running in a screen on deployment-bastion.pmtpa.wmflabs [releng]
10:08 <hashar> applying role::labs::lvm::mnt on deployment-db1 to provide additional disk space on /mnt [releng]
09:39 <hashar> convert all remaining hosts but db1 to use the local puppet and salt masters [releng]
2014-03-19 §
21:23 <bd808> Converted deployment-cache-text02 to use local puppet & salt masters [releng]
20:21 <hashar> migrating eqiad varnish caches to use xfs [releng]
17:58 <bd808> Converted deployment-parsoid04 to use local puppet & salt masters [releng]
17:51 <bd808> Converted deployment-eventlogging02 to use local puppet & salt masters [releng]
17:22 <bd808> Converted deployment-cache-bits01 to use local puppet & salt masters; puppet:///volatile/GeoIP not found on deployment-salt puppetmaster [releng]
17:00 <bd808> Converted deployment-apache02 to use local puppet & salt masters [releng]
16:49 <bd808> Converted deployment-apache01 to use local puppet & salt masters [releng]
16:30 <hashar> Varnish caches in eqiad are failing puppet because there is no /dev/vdb. Will figure it out tomorrow :-] [releng]
16:15 <hashar> Applying role::logging::mediawiki::errors on deployment-fluoride.eqiad.wmflabs . It is not receiving anything yet though. [releng]
15:50 <hashar> fixed udp2log-mw daemon not starting on eqiad bastion (/var/log/udp2log belonged to wrong UID/GID) [releng]
15:49 <hashar> deleted local user l10nupdate on deployment-bastion. It is in ldap now. [releng]
2014-03-17 §
15:02 <hashar> Starting to copy /data/project from pmtpa to eqiad [releng]
14:46 <hashar> manually purging all commonswiki archived files (on beta of course) [releng]
2014-03-14 §
14:47 <hashar> changing uid/gid of mwdeploy, which is now provisioned via LDAP (aka deleting the local user and group on all instances + file permission tweaks) [releng]
2014-03-11 §
10:46 <hashar> dropping some unused databases from deployment-sql instance. [releng]
2014-03-10 §
11:09 <hashar> Deleting http://simple.wikipedia.beta.wmflabs.org/wiki/MediaWiki:Robots.txt [releng]
09:54 <hashar> Reducing memcached instances to 3GB ({{gerrit|115617}}). Seems to fix writing to the eqiad memcaches, which only have 3GB [releng]
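For context, memcached's memory cap is set with its -m flag, in megabytes; a 3 GB limit like the one applied here would appear in a Debian-style config roughly as below. The file path and exact formatting are assumptions, not taken from the gerrit change:

```
# /etc/memcached.conf (fragment, hypothetical)
# Cap cache memory at 3 GB to match the eqiad hosts
-m 3072
```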
09:08 <hashar> Restarted bits cache (CPU / mem overload) [releng]
2014-03-06 §
09:07 <hashar> restarted varnish and varnish-frontend on deployment-cache-text1 [releng]
2014-03-05 §
17:26 <hashar> hacked mwversioninuse to return "master=aawiki". Relaunched the l10n job as the mwdeploy user, then ran mw-update-l10n [releng]