2014-03-20 §
23:46 <bd808> Mounted secondary disk as /var/lib/elasticsearch on deployment-logstash1 [releng]
23:46 <bd808> Converted deployment-tin to use local puppet & salt masters [releng]
22:09 <hashar> Migrated videoscaler01 to use self-hosted salt/puppet masters. [releng]
21:30 <hashar> manually installing timidity-daemon on jobrunner01.eqiad so puppet can stop it and stop whining [releng]
21:00 <hashar> migrated jobrunner01.eqiad.wmflabs to self-hosted puppet/salt masters [releng]
20:55 <hashar> deleting deployment-jobrunner02, let's start with a single instance for now [releng]
20:51 <hashar> Creating deployment-jobrunner01 and 02 in eqiad. [releng]
15:47 <hashar> fixed salt-minion service on deployment-cache-upload01 and deployment-cache-mobile03 by deleting /etc/salt/pki/minion/minion_master.pub [releng]
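(For reference, that fix likely boiled down to removing the stale cached master key and restarting the minion; only the key path comes from the entry, the restart invocation is an assumption:)
    # remove the stale cached master public key (path taken from the log entry)
    rm /etc/salt/pki/minion/minion_master.pub
    # restart the minion so it re-fetches and accepts the new master's key (service name assumed)
    service salt-minion restart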
15:30 <hashar> migrated deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs to use the salt/puppetmaster deployment-salt.eqiad.wmflabs. [releng]
15:30 <hashar> deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs recovered!! /dev/vdb does not exist on eqiad, which caused the instances to stall. [releng]
10:48 <hashar> Stopped the simplewiki script. Would need to recreate the db from scratch instead [releng]
10:37 <hashar> Cleaning up simplewiki by deleting most pages in the main namespace. Would free up some disk space. deleteBatch.php is running in a screen on deployment-bastion.pmtpa.wmflabs [releng]
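(A rough sketch of that deleteBatch.php run; only the script, the wiki, and the main-namespace cleanup come from the entry, the page-list file and deletion reason are hypothetical:)
    # MediaWiki maintenance script that deletes every page listed in the given file
    mwscript deleteBatch.php --wiki=simplewiki -r "Freeing disk space on beta" main-ns-pages.txt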
10:08 <hashar> applying role::labs::lvm::mnt on deployment-db1 to provide additional disk space on /mnt [releng]
09:39 <hashar> converted all remaining hosts except db1 to use the local puppet and salt masters [releng]
2014-03-19 §
21:23 <bd808> Converted deployment-cache-text02 to use local puppet & salt masters [releng]
20:21 <hashar> migrating eqiad varnish caches to use xfs [releng]
17:58 <bd808> Converted deployment-parsoid04 to use local puppet & salt masters [releng]
17:51 <bd808> Converted deployment-eventlogging02 to use local puppet & salt masters [releng]
17:22 <bd808> Converted deployment-cache-bits01 to use local puppet & salt masters; puppet:///volatile/GeoIP not found on deployment-salt puppetmaster [releng]
17:00 <bd808> Converted deployment-apache02 to use local puppet & salt masters [releng]
16:49 <bd808> Converted deployment-apache01 to use local puppet & salt masters [releng]
16:30 <hashar> Varnish caches in eqiad are failing puppet because there is no /dev/vdb. Will figure it out tomorrow :-] [releng]
16:15 <hashar> Applying role::logging::mediawiki::errors on deployment-fluoride.eqiad.wmflabs . It is not receiving anything yet though. [releng]
15:50 <hashar> fixed udp2log-mw daemon not starting on eqiad bastion ( /var/log/udp2log belonged to the wrong UID/GID) [releng]
15:49 <hashar> deleted local user l10nupdate on deployment-bastion. It is in ldap now. [releng]
2014-03-17 §
15:02 <hashar> Started copying /data/project from pmtpa to eqiad [releng]
14:46 <hashar> manually purging all commonswiki archived files (on beta of course) [releng]
2014-03-14 §
14:47 <hashar> changing uid/gid of mwdeploy, which is now provisioned via LDAP (i.e. deleting the local user and group on all instances + file permission tweaks) [releng]
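(Roughly, on each instance the local account gets dropped so the LDAP-provisioned one takes over, and files still owned by the old uid/gid get re-chowned; OLDUID/OLDGID are placeholders:)
    # drop the local user/group so the LDAP mwdeploy wins
    deluser mwdeploy
    delgroup mwdeploy 2>/dev/null || true   # may already be gone with the user
    # re-own anything still carrying the old local uid/gid
    find / -xdev \( -uid OLDUID -o -gid OLDGID \) -exec chown mwdeploy:mwdeploy {} +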
2014-03-11 §
10:46 <hashar> dropping some unused databases from deployment-sql instance. [releng]
2014-03-10 §
11:09 <hashar> Deleting http://simple.wikipedia.beta.wmflabs.org/wiki/MediaWiki:Robots.txt [releng]
09:54 <hashar> Reducing memcached instances to 3GB ( {{gerrit|115617}} ). Seems to fix writing to the EQIAD memcaches which only have 3GB [releng]
09:08 <hashar> Restarted bits cache (CPU / mem overload) [releng]
2014-03-06 §
09:07 <hashar> restarted varnish and varnish-frontend on deployment-cache-text1 [releng]
2014-03-05 §
17:26 <hashar> hacked in mwversioninuse to return "master=aawiki". Relaunched l10n job using mwdeploy user and then running mw-update-l10n [releng]
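(The relaunch presumably looked something like this; the stub output comes from the entry, the sudo form is an assumption:)
    # mwversioninuse was stubbed so the l10n job sees the master branch instead of a wmf one:
    #   master=aawiki
    # then the localisation cache rebuild was rerun as the deploy user
    sudo -u mwdeploy mw-update-l10n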
17:07 <hashar> mwversioninuse gives a wmf branch instead of master. That breaks the l10n message update and the job https://integration.wikimedia.org/ci/job/beta-code-update/ . Root cause is the Python-based scap. [releng]
2014-03-03 §
17:28 <manybubbles> doing an Elasticsearch reindex on beta before I try another one in production [releng]
2014-02-28 §
10:17 <hashar> Puppet is running on the varnish upload cache again after several months. Might break random things in the process :( [releng]
2014-02-27 §
14:11 <manybubbles> upgrading beta to Elasticsearch 1.0 [releng]
2014-02-26 §
20:44 <hashar> Cleaning up commonswiki archived files with mwscript deleteArchivedFiles.php --wiki=commonswiki --delete [releng]
20:44 <hashar> deleted all files from http://commons.wikimedia.beta.wmflabs.org/wiki/Category:GWToolset_Batch_Upload (gwtoolset import test). Deleted File:Title_0* (Selenium tests). [releng]
15:06 <hashar> deleted all thumbs from shared directory: /data/project/upload7/*/*/thumb/* [releng]
14:54 <hashar> cleaning out 2013 archived logs. [releng]
2014-02-25 §
08:42 <hashar> Upgrading all varnishes. [releng]
2014-02-24 §
23:36 <MaxSem> Rolled back [releng]
23:25 <hoo> recursively chowned extensions/MobileFrontend to mwdeploy:mwdeploy [releng]
23:21 <hoo> chowned /data/project/apache/common-local/php-master/extensions/.git/modules/MobileFrontend/* to mwdeploy:mwdeploy [releng]
17:47 <MaxSem> Investigating a mobile bug, might cause intermittent problems [releng]
17:36 <MaxSem> Rebooted deployment-cache-mobile01 - it was impossible to log into, though Varnish still worked [releng]
2014-02-21 §
19:42 <MaxSem> Adjusted read privs on /home/wikipedia/syslog/apache.log to allow fatalmonitor to work [releng]
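(Likely just adding a read bit on the log file; the exact mode is an assumption:)
    # let non-owners (e.g. fatalmonitor) read the apache log
    chmod o+r /home/wikipedia/syslog/apache.log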
2014-02-19 §
16:24 <hashar> -bastion : /etc/init.d/udp2log stop && /etc/init.d/udp2log-mw start (known bug) [releng]