2017-08-07
09:06 <elukey> set net.netfilter.nf_conntrack_tcp_timeout_time_wait=65 (was 120) on all the analytics kafka brokers - T136094 [production]
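The entry above lowers the conntrack TIME_WAIT timeout on the analytics Kafka brokers from 120s to 65s. A minimal sketch of how such a change is applied and persisted (the drop-in file name is illustrative, and a setting like this is normally puppet-managed rather than edited by hand):

    # apply at runtime
    sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=65
    # persist across reboots (file name is an assumption)
    echo 'net.netfilter.nf_conntrack_tcp_timeout_time_wait = 65' > /etc/sysctl.d/60-conntrack-time-wait.conf
    sysctl --system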
09:03 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2065 after fixing: linter, page and watchlist tables (duration: 00m 47s) [production]
08:12 <marostegui> Force BBU re-learn on db1016 - T166344 [production]
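A BBU re-learn forces the RAID controller's battery backup unit through a discharge/recharge cycle to recalibrate its capacity reading. A sketch of how this is typically triggered, assuming an LSI/MegaRAID controller (the actual controller and tool on db1016 may differ):

    # check current battery status
    megacli -AdpBbuCmd -GetBbuStatus -aALL
    # start a manual learn cycle
    megacli -AdpBbuCmd -BbuLearn -aALL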
07:02 <marostegui> Stop replication on db2065 to reimport: page, linter and watchlist tables [production]
07:02 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2065 to reimport: page, linter and watchlist tables (duration: 00m 47s) [production]
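The two entries above depool db2065 and stop its replication so the page, linter and watchlist tables can be reimported. The exact reimport procedure is not recorded in the log; a sketch assuming a logical dump taken from a healthy replica (host, database and path placeholders are illustrative):

    # on db2065, once depooled
    mysql -e "STOP SLAVE"
    mysqldump -h <healthy-replica> <wikidb> page linter watchlist > /srv/tmp/reimport.sql
    mysql <wikidb> < /srv/tmp/reimport.sql
    mysql -e "START SLAVE"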
06:38 <marostegui> Stop MySQL on db2074 - T171321 [production]
06:37 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2074 - T171321 (duration: 00m 46s) [production]
06:33 <marostegui> Stop replication on db2075 - T170662 [production]
06:27 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2073 - T171321 (duration: 00m 47s) [production]
06:20 <marostegui> Force BBU re-learn on db1016 - T166344 [production]
02:57 <l10nupdate@tin> ResourceLoader cache refresh completed at Mon Aug 7 02:57:42 UTC 2017 (duration 6m 42s) [production]
02:51 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.12) (duration: 07m 56s) [production]
02:30 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.11) (duration: 10m 16s) [production]
2017-08-06
13:17 <elukey> powercycle mw2256 - com2 frozen - T163346 [production]
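Power-cycling a host with a frozen serial console (com2) is normally done out-of-band through the management interface. A sketch using ipmitool; the management hostname pattern and credential handling are assumptions:

    ipmitool -I lanplus -H mw2256.mgmt.codfw.wmnet -U root -E chassis power status
    ipmitool -I lanplus -H mw2256.mgmt.codfw.wmnet -U root -E chassis power cycle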
13:13 <elukey> restart pdfrender on scb1002 [production]
06:18 <ebernhardson@tin> Synchronized wmf-config/PoolCounterSettings.php: T169498: Reduce cirrus search pool counter to 200 parallel requests cluster wide (duration: 02m 54s) [production]
01:28 <chasemp> conf2002:~# service etcdmirror-conftool-eqiad-wmnet restart (not sure what else to do; the service failed) [production]
2017-08-05
14:40 <Reedy> created oauth tables on foundationwiki T172591 [production]
14:13 <reedy@tin> Synchronized php-1.30.0-wmf.12/extensions/WikimediaMaintenance/createExtensionTables.php: add oauth (duration: 00m 48s) [production]
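The two entries above first deploy the updated createExtensionTables.php maintenance script and then run it to create the OAuth tables on foundationwiki. A sketch of the likely invocation via the mwscript wrapper (the exact argument form is an assumption):

    mwscript extensions/WikimediaMaintenance/createExtensionTables.php --wiki=foundationwiki oauth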
2017-08-04
23:51 <mutante> phab2001 - removed outdated /etc/hosts entries, that fixed rsync, syncing /srv/repos/ from phab1001 [production]
23:35 <mutante> phab2001 rebooting [production]
23:35 <mutante> phab2001 - installing various package upgrades, apt-get autoremove old kernel images [production]
23:12 <mutante> "reserved" UID 498 for phd on https://wikitech.wikimedia.org/wiki/UID | phab2001: find -exec chown to fix all the files , restart cron [production]
23:04 <mutante> phab2001 - changing UID/GID for phd user from 997:997 to 498:498 to make it match phab1001, to fix rsync breaking permissions. (rsync forces --numeric-ids when fetching from and rsyncd configured with chroot=yes). chown -R phw:www-data /srv/repos/ [production]
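The two entries above change the phd daemon user's UID/GID on phab2001 to match phab1001, then fix up file ownership so the chrooted, numeric-id rsync stops breaking permissions. A sketch of the sequence (the service stop/start steps and exact find predicates are assumptions):

    service phd stop
    groupmod -g 498 phd
    usermod -u 498 phd
    # files still carry the old numeric ids after the change; fix them up
    find /srv/repos -uid 997 -exec chown phd {} +
    find /srv/repos -gid 997 -exec chgrp phd {} +
    service phd start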
22:37 <ejegg> restarted donations and refund queue consumers [production]
21:44 <ejegg> stopped donations and refund queue consumers [production]
21:24 <urandom> T172384: Disabling Puppet in dev environment to prevent unattended Cassandra restarts [production]
20:19 <mutante> renewing SSL cert for status.wm.org (just like wikitech-static, but that one didn't have monitoring?) [production]
20:02 <mutante> wikitech-static-ord - apt-get install certbot [production]
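The two entries above install certbot on wikitech-static-ord and renew the certificate used by status.wm.org. A sketch of the typical commands (plugin choice and domain argument are illustrative):

    apt-get install certbot
    # initial issuance for a new domain
    certbot certonly --standalone -d <domain>
    # renew all certificates that are close to expiry
    certbot renew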
19:41 <ejegg> updated CiviCRM from f1fd7f0f9e89f59a8fc4daaa5e95803a2f60acbb to f24ba787f711ed38029594f3f3049bd79221ddd7 [production]
19:38 <mutante> renaming graphite varnish director/fixing config, running puppet on cache misc, tested on cp1045 [production]
18:17 <andrewbogott> switched most cloud instances to new puppetmasters, as per https://phabricator.wikimedia.org/T171786 [production]
11:46 <marostegui> Deploy schema change directly on s3 master for maiwikimedia - T172485 [production]
11:30 <marostegui> Deploy schema change directly on s3 master for kbpwiki - T172485 [production]
11:14 <marostegui> Deploy schema change directly on s3 master for dinwiki - T172485 [production]
10:14 <marostegui> Deploy schema change directly on s3 master for atjwiki - T172485 [production]
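The four entries above apply the same schema change (T172485) one small s3 wiki at a time, directly on the s3 master; applying it on the master lets the change flow to every replica through normal replication instead of altering hosts one by one. The actual DDL is not recorded in the log; the general pattern of a direct-on-master change looks roughly like this (host and statement are placeholders):

    mysql -h <s3-master> maiwikimedia -e "ALTER TABLE <table> <change>"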
10:05 <marostegui> Stop replication on db2073 for maintenance [production]
09:22 <marostegui> Add dbstore2002 to tendril - T171321 [production]
09:19 <marostegui> Deploy schema change directly on s3 master for techconductwiki - T172485 [production]
08:35 <marostegui> Deploy schema change directly on s3 master for hiwikiversity - T172485 [production]
08:19 <marostegui> Deploy schema change directly on s3 master for wikimania2018wiki - T172485 [production]
08:04 <marostegui> Sanitize wikimania2018wiki on sanitarium and sanitarium2 - T155041 [production]
07:47 <marostegui> Stop MySQL on db2073 to copy its data to dbstore2002 - https://phabricator.wikimedia.org/T171321 [production]
07:47 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2073 - T171321 (duration: 00m 47s) [production]
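The two entries above depool db2073 and stop its MySQL instance so its datadir can be copied cold to dbstore2002. A sketch of the copy step (service management, paths and transfer tool are assumptions; large datadirs are usually streamed compressed):

    # on db2073, after depooling
    mysql -e "STOP SLAVE"
    service mysql stop
    # copy the data directory to the target host (tool and path are illustrative)
    rsync -a /srv/sqldata/ dbstore2002.codfw.wmnet:/srv/sqldata/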
07:07 <moritzm> installing imagemagick regression security updates on trusty [production]
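Per host, installing a targeted security update on the trusty fleet looks roughly like the sketch below (in practice this is orchestrated fleet-wide rather than run by hand; the package names are illustrative):

    apt-get update
    apt-get install --only-upgrade imagemagick libmagickcore5 libmagickwand5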
06:47 <marostegui> Sanitize hiwikiversity on sanitarium and sanitarium2 - T171829 [production]
05:23 <mutante> phab1001 sudo ip addr del 10.64.32.186/32 dev eth0 (T172478) [production]
02:28 <dzahn@neodymium> conftool action : set/pooled=yes; selector: name=phab1001-vcs.eqiad.wmnet [production]
02:28 <dzahn@neodymium> conftool action : set/pooled=no; selector: name=phab1001-vcs.eqiad.wmnet [production]
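The two neodymium entries above record a depool/repool cycle of phab1001-vcs via conftool. A sketch of the confctl invocations that produce log lines of this shape (exact invocation is an assumption):

    confctl select 'name=phab1001-vcs.eqiad.wmnet' set/pooled=no    # depool
    confctl select 'name=phab1001-vcs.eqiad.wmnet' set/pooled=yes   # repool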
02:15 <mutante> phab1001 can't talk to the mx servers via IPv6, but it works via IPv4. iridium and other mail servers can talk IPv6 to it just fine - why? The behaviour did not change even with ferm stopped on the client, and the server side allows connections from anywhere. Workaround for now was to hardcode the IPv4 address in the phab config. (T163938) [production]
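A sketch of the kind of checks behind the entry above when a single host cannot reach the MX servers over IPv6 (host names and addresses are illustrative):

    ping6 -c3 <mx-host>                 # basic IPv6 reachability
    nc -6 -vz <mx-host> 25              # can an SMTP connection be opened over IPv6?
    ip -6 route get <mx-ipv6-address>   # which route/interface is chosen
    ip6tables -S | head                 # confirm the ferm/ip6tables ruleset state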