2019-03-03

20:54 <andrewbogott> cleaning out /tmp on tools-exec-1412 [tools]
12:26 <volans|off> restarted icinga on icinga2001, stale status file, too many open files [production]
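
(A quick way to check whether a daemon is exhausting its file-descriptor limit; a hedged sketch, with the process name "icinga" assumed rather than confirmed from the host:)

    # Count open file descriptors held by the oldest matching icinga process
    ls /proc/$(pgrep -o icinga)/fd | wc -l
    # Compare against the per-process limit
    grep -i 'open files' /proc/$(pgrep -o icinga)/limits
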
10:44 <elukey> restart pdfrender on scb1003 [production]
02:26 <Krinkle> Tried rebooting or shutting down integration-slave-docker-1021; no response via Horizon. Did pause/resume instead, which did work, after which shutdown/start worked. Jenkins agent has been relaunched and seems online again. [releng]
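
(Horizon's pause/resume and shutdown/start buttons correspond to these OpenStack CLI calls; a minimal sketch, assuming credentials for the project are loaded in the shell:)

    # Pause and resume the stuck instance, then attempt a clean stop/start
    openstack server pause integration-slave-docker-1021
    openstack server unpause integration-slave-docker-1021
    openstack server stop integration-slave-docker-1021
    openstack server start integration-slave-docker-1021
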
02:20 <Krinkle> integration-slave-docker-1021 (ci1.medium) has jobs failing on it due to ENOMEM. Horizon console log shows: integration-slave-docker-1021 login: [4961938.696837] Out of memory: Kill process 21770 (chromium) score 841 or sacrifice child; [4961938.699176] Killed process 21770 (chromium) total-vm:3171496kB, anon-rss:1379288kB, file-rss:0kB, shmem-rss:1636kB [releng]
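
(OOM-killer events like the one above can be confirmed from the kernel log on the instance itself; a hedged sketch, not the commands actually used here:)

    # Recent OOM-killer activity, with human-readable timestamps
    dmesg -T | grep -i 'out of memory'
    # Or from the journal on systemd hosts
    journalctl -k | grep -i 'killed process'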

2019-03-02

22:12 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/493959 [releng]
21:37 <valhallasw`cloud> Converted tvpupdater & archivering. Also upgraded the latter to Python 3 & pywikibot-core. [tools.nlwikibots]
20:44 <hauskatze> Renamed https://github.com/wikimedia/wikimedia-github-community-health-defaults to https://github.com/wikimedia/.github [releng]
20:42 <hauskatze> ssh -p 29418 gerrit.wikimedia.org replication start wikimedia/github-community-health-defaults --wait [releng]
20:40 <hauskatze> Created GitHub repository https://github.com/wikimedia/wikimedia-github-community-health-defaults [releng]
20:31 <Reedy> reloading zuul to deploy https://gerrit.wikimedia.org/r/493881 [releng]
20:30 <Krinkle> Failure on integration-slave-docker-1021 (ENOMEM) https://integration.wikimedia.org/ci/job/fresnel-node10-browser-docker/61/console [releng]
19:51 <legoktm> deploying https://gerrit.wikimedia.org/r/493872 [releng]
19:37 <legoktm> deploying https://gerrit.wikimedia.org/r/493862 [releng]
19:31 <valhallasw`cloud> Cleaning up old (2017-2018) log files [tools.nlwikibots]
18:55 <framawiki> block spammer https://quarry.wmflabs.org/Twc93521 `INSERT INTO user_group (user_id, group_name) VALUES (3734, "blocked");` [quarry]
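
(If the block ever needs reversing, the inverse statement would look like the sketch below; only the user_group schema comes from the entry above, and the "quarry" database name is an assumption:)

    # Hypothetical unblock: remove the row inserted above (database name assumed)
    mysql quarry -e 'DELETE FROM user_group WHERE user_id = 3734 AND group_name = "blocked";'
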
18:26 <Reedy> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/493808 [releng]
18:21 <Reedy> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/493837 [releng]
12:12 <gtirloni> labstore1006 started nfsd T217473 [production]

2019-03-01

20:45 <ejegg> turned off fundraising omnimail process unsubscribes job [production]
19:40 <XioNoX> pre-configure asw-a8 ports on asw2-a8-eqiad - T187960 [production]
19:32 <XioNoX> pre-configure asw-a7 ports on asw2-a7-eqiad - T187960 [production]
19:29 <XioNoX> pre-configure asw-a6 ports on asw2-a6-eqiad - T187960 [production]
19:17 <thcipriani> integration-slave-docker-1021:/# docker rmi $(docker images | grep " months " | grep -v " [1-2] months " | awk '{print $3}') [releng]
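
(The same cleanup intent -- removing images older than roughly two months while keeping newer ones -- can be expressed with Docker's built-in filter; a sketch, not what was run here:)

    # Remove unused images older than ~60 days (1440h); -a includes tagged images too
    docker image prune -a --filter "until=1440h"
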
19:17 <XioNoX> pre-configure asw-a5 ports on asw2-a5-eqiad - T187960 [production]
18:53 <robh> notebook1003 has unusually high load recently (23) and seemed to lag in reporting to icinga. No hardware failures; pinged about it in #wikimedia-analytics [production]
17:02 <thcipriani> integration-slave-jessie-1004 back online [releng]
16:58 <thcipriani> integration-slave-jessie-1002 back online (disk space looked fine); rebooting integration-slave-jessie-1004 -- can't ssh to machine [releng]
16:33 <jbond42> rolling security update of bind9 packages on jessie and trusty [production]
16:11 <Lucas_WMDE> delete refs/master and refs/gerrit/master on WikibaseQualityConstraints repository T217408 [releng]
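
(Stray refs like these can be deleted by pushing empty refspecs; a minimal sketch, assuming the remote is named "origin" and the account is permitted to delete refs:)

    # Push empty refspecs to delete the stray refs on the remote
    git push origin ':refs/master' ':refs/gerrit/master'
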
15:49 <hashar> wikidata/query/blazegraph: change Gerrit config to require a Change-Id # T216855 [releng]
15:44 <AmandaNP> installed missing requests_oauthlib via pip [utrs]
15:43 <AmandaNP> installed missing "pip" [utrs]
15:38 <AmandaNP> wget tested on utrs-production2 to verify errors in apache log are clear. Everything looks good [utrs]
15:38 <ema> trafficserver_8.0.2-1wm1 uploaded to stretch-wikimedia [production]
15:24 <AmandaNP> reset db password for deltaquad due to inability to log in with the right password [utrs]
15:02 <akosiaris> restore proton config values [production]
14:57 <AmandaNP> rebooting utrs-database2 [utrs]
14:55 <AmandaNP> reinstalled python-mysqldb on utrs-database2 because it had gone missing [utrs]
14:50 <andrewbogott> rebooting utrs-production2 to resolve NFS-mounting issues [utrs]
14:33 <hashar> Updating all debian-glue Jenkins jobs to properly take into account the BUILD_TIMEOUT parameter # T217403 [production]
14:28 <hashar> Upgrading integration/jenkins-job-builder to version 2.0.2 + one custom hack 11aa5de4...a06d173e # T143731 [releng]
14:18 <hashar> integration/jenkins-job-builder: importing upstream code to new branch "upstream". Pushed all upstream tags to our repository [releng]
13:24 <moritzm> removed sca* hosts from debmonitor database [production]
12:49 <akosiaris> lower max_render_queue_size to 20 for proton on proton100{1,2} [production]
12:32 <akosiaris> restart proton1002, OOM showed up [production]
12:31 <akosiaris> restart proton on proton1001, counted 99 chromium processes left running since at least Jan 30 [production]
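
(Leftover renderer processes like these are easy to spot with pgrep/ps; a hedged sketch, not the exact diagnostic used:)

    # Count chromium processes, then list their start times and ages
    pgrep -c chromium
    ps -eo pid,lstart,etime,cmd | grep '[c]hromium'
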
11:47 <jbond42> rebooting labsdb1005.codfw.wmnet [production]