2021-02-01
08:53 <gilles@deploy1001> Finished deploy [performance/navtiming@1e02d76]: T271208 Add more debug logging (duration: 00m 05s) [production]
08:53 <gilles@deploy1001> Started deploy [performance/navtiming@1e02d76]: T271208 Add more debug logging [production]
08:53 <elukey@cumin1001> START - Cookbook sre.dns.netbox [production]
08:45 <marostegui@cumin1001> dbctl commit (dc=all): 'db1175 (re)pooling @ 10%: Slowly pooling db1175 for the first time', diff saved to https://phabricator.wikimedia.org/P14075 and previous config saved to /var/cache/conftool/dbconfig/20210201-084531-root.json [production]
08:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1089 from dbctl T273417', diff saved to https://phabricator.wikimedia.org/P14074 and previous config saved to /var/cache/conftool/dbconfig/20210201-084523-marostegui.json [production]
08:42 <marostegui@cumin1001> dbctl commit (dc=all): 'db1166 (re)pooling @ 4%: Slowly pooling db1166 for the first time', diff saved to https://phabricator.wikimedia.org/P14073 and previous config saved to /var/cache/conftool/dbconfig/20210201-084211-root.json [production]
08:29 <marostegui@cumin1001> dbctl commit (dc=all): 'db1175 (re)pooling @ 7%: Slowly pooling db1175 for the first time', diff saved to https://phabricator.wikimedia.org/P14072 and previous config saved to /var/cache/conftool/dbconfig/20210201-082933-root.json [production]
08:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1166 (re)pooling @ 2%: Slowly pooling db1166 for the first time', diff saved to https://phabricator.wikimedia.org/P14071 and previous config saved to /var/cache/conftool/dbconfig/20210201-082707-root.json [production]
08:17 <godog> swift codfw-prod decrease HDD weight for ms-be20[16-27] - T272837 [production]
08:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1166 with minimal weight for the first time T258361', diff saved to https://phabricator.wikimedia.org/P14070 and previous config saved to /var/cache/conftool/dbconfig/20210201-081554-marostegui.json [production]
08:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1175 (re)pooling @ 5%: Slowly pooling db1175 for the first time', diff saved to https://phabricator.wikimedia.org/P14069 and previous config saved to /var/cache/conftool/dbconfig/20210201-081429-root.json [production]
08:05 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1166 to dbctl, depooled T258361', diff saved to https://phabricator.wikimedia.org/P14068 and previous config saved to /var/cache/conftool/dbconfig/20210201-080520-marostegui.json [production]
07:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db1175 (re)pooling @ 3%: Slowly pooling db1175 for the first time', diff saved to https://phabricator.wikimedia.org/P14067 and previous config saved to /var/cache/conftool/dbconfig/20210201-075926-root.json [production]
07:44 <marostegui@cumin1001> dbctl commit (dc=all): 'db1175 (re)pooling @ 2%: Slowly pooling db1175 for the first time', diff saved to https://phabricator.wikimedia.org/P14066 and previous config saved to /var/cache/conftool/dbconfig/20210201-074422-root.json [production]
07:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1175 with some more minimal weight T258361', diff saved to https://phabricator.wikimedia.org/P14065 and previous config saved to /var/cache/conftool/dbconfig/20210201-073603-marostegui.json [production]
07:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1094 (re)pooling @ 100%: After fixing replication', diff saved to https://phabricator.wikimedia.org/P14064 and previous config saved to /var/cache/conftool/dbconfig/20210201-070429-root.json [production]
06:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1094 (re)pooling @ 75%: After fixing replication', diff saved to https://phabricator.wikimedia.org/P14063 and previous config saved to /var/cache/conftool/dbconfig/20210201-064926-root.json [production]
06:39 <marostegui> Run analyze table on db2071 and db2102 [production]
06:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1094 (re)pooling @ 50%: After fixing replication', diff saved to https://phabricator.wikimedia.org/P14062 and previous config saved to /var/cache/conftool/dbconfig/20210201-063422-root.json [production]
06:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1175 to dbctl, depooled T258361', diff saved to https://phabricator.wikimedia.org/P14061 and previous config saved to /var/cache/conftool/dbconfig/20210201-062358-marostegui.json [production]
06:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1094 (re)pooling @ 25%: After fixing replication', diff saved to https://phabricator.wikimedia.org/P14060 and previous config saved to /var/cache/conftool/dbconfig/20210201-061919-root.json [production]
06:10 <marostegui> Upgrade db2071 and db2102 to 10.4.18 [production]
06:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1094 (re)pooling @ 10%: After fixing replication', diff saved to https://phabricator.wikimedia.org/P14059 and previous config saved to /var/cache/conftool/dbconfig/20210201-060415-root.json [production]
05:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1094', diff saved to https://phabricator.wikimedia.org/P14058 and previous config saved to /var/cache/conftool/dbconfig/20210201-055851-marostegui.json [production]
2021-01-31
17:34 <Operator873> restarted CVNBot15 [cvn]
15:20 <wm-bot> <lucaswerkmeister> deployed 981843b704 (work around T222159) [tools.ranker]
15:16 <wm-bot> <lucaswerkmeister> deployed ef54a0b1a8 (handle missing entity error) [tools.ranker]
14:41 <wm-bot> <lucaswerkmeister> deployed a6392a7c30 (File:… as input on index page) [tools.ranker]
2021-01-30
20:25 <wm-bot> <lucaswerkmeister> deployed a87ce138db (show bulk parse errors) [tools.lexeme-forms]
13:16 <wm-bot> <lucaswerkmeister> deployed 50f5f35cb4 (singular/plural) [tools.ranker]
12:43 <wm-bot> <lucaswerkmeister> deployed 64cc29192a (format entity IDs) [tools.ranker]
2021-01-29
23:26 <razzi@cumin1001> END (PASS) - Cookbook sre.kafka.reboot-workers (exit_code=0) for Kafka test cluster: Reboot kafka nodes - razzi@cumin1001 [production]
22:36 <dancy@deploy1001> Finished scap: MW servers complaining about l10n files after .27 rollback (duration: 07m 22s) [production]
22:29 <dancy@deploy1001> Started scap: MW servers complaining about l10n files after .27 rollback [production]
22:26 <dancy@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.27 [production]
22:20 <reedy@deploy1001> Synchronized php-1.36.0-wmf.27/includes/parser/CacheTime.php: CacheTime: Extra protection for rollback unserialization T273007 (duration: 01m 00s) [production]
22:14 <dancy@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.28 [production]
22:09 <dancy@deploy1001> scap failed: average error rate on 8/9 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/83629bcb5560d11e61d3085c89dd9ed6 for details) [production]
21:42 <razzi> rebalance kafka partitions for codfw.resource_change [production]
21:40 <razzi@cumin1001> START - Cookbook sre.kafka.reboot-workers for Kafka test cluster: Reboot kafka nodes - razzi@cumin1001 [production]
20:23 <razzi> rebalance kafka partitions for eventlogging_NavigationTiming [analytics]
19:30 <razzi> rebalance kafka partitions for eqiad.mediawiki.revision-score [analytics]
19:29 <razzi> rebalance kafka partitions for codfw.mediawiki.revision-score [analytics]
19:26 <razzi@cumin1001> END (FAIL) - Cookbook sre.kafka.reboot-workers (exit_code=99) for Kafka test cluster: Reboot kafka nodes - razzi@cumin1001 [production]
19:26 <razzi@cumin1001> START - Cookbook sre.kafka.reboot-workers for Kafka test cluster: Reboot kafka nodes - razzi@cumin1001 [production]
19:14 <razzi> rebalance kafka partitions for eventlogging_CpuBenchmark [analytics]
19:11 <razzi> rebalance kafka partitions for eqiad.mediawiki.page-links-change [analytics]
19:10 <razzi> rebalance kafka partitions for codfw.mediawiki.page-links-change [analytics]
18:51 <hashar> CI slightly overloaded due to a surge of library updates but is otherwise processing changes [releng]
18:50 <hashar> CI slightly overloaded due to a surge of library updates but is otherwise processing changes [production]