Pinned toot

This Mastodon account will be used for all maintenance, upgrade and downtime information about the Organise.Earth and Rebellion.Global family of servers.

Please ensure that your XR Branch designates a tech coordinator to monitor this account.

Downtime is anticipated.

Love and Code,

@xradmin

The Mattermost instance at Organise.Earth is down for 15 minutes of scheduled maintenance.

This Mastodon instance was down for some time today due to a RAM shortage. A new server will be deployed for this instance as it is growing very fast and needs its own room to grow, while maintaining separation from other XR services.

GitLab at code.organise.earth is currently down during a security maintenance upgrade.

At 20:40 UTC Organise.Earth will go down for 10 minutes while data is being moved to a new partition.

The update and data transfer are complete, with the Cloud (Nextcloud) and Base (Discourse) both upgraded.

Mattermost, Cloud and Base are all presently down for maintenance. It is taking longer than anticipated due to some necessary database operations. At this stage it is expected to be complete by 17:30 UTC.

A new memcaching strategy has been deployed on cloud.organise.earth and cloud2.organise.earth that appears to significantly boost performance.

The situation between the datacenter and Sunrise, Switzerland's largest service provider, now appears to be resolved.

Confirmation now that it is not a block, despite earlier reports, but rather a more general Internet infrastructure problem affecting incoming traffic to the datacenter in Switzerland (see atlas.ripe.net/measurements/23). Further updates forthcoming.

Despite earlier reports of a block, we now have reports that some US and French rebels cannot reach hosts at the datacenter at the IP layer. Investigating.
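
For branch tech coordinators who want to run their own checks, a minimal sketch along these lines can help distinguish a provider-level block from a wider routing problem by testing hosts at the IP and TCP layers. The hostnames and port below are illustrative assumptions, not the exact targets the team is testing.

```python
#!/usr/bin/env python3
"""Rough reachability check: an ICMP ping plus a TCP connect attempt.

If both fail from one provider but succeed from another, that points
toward a provider-level block; failures from many different networks
point toward a wider infrastructure or routing problem.
"""
import socket
import subprocess

HOSTS = ["organise.earth", "rebellion.global"]  # illustrative hosts
PORT = 443  # assumption: HTTPS is exposed on these hosts


def icmp_ping(host: str) -> bool:
    """Send a single ICMP echo request via the system ping binary (Linux-style flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def tcp_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; succeeds only if routing and the service both work."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: icmp={icmp_ping(host)} tcp:{PORT}={tcp_connect(host, PORT)}")
```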

Bell, a major Canadian service provider, has blocked most of the country from accessing Organise.Earth. A small team in Canada is currently in discussion with Bell as to their reasoning and when the ban can be lifted.

This Mastodon node has been down today for some time due to spurious and unforeseen RAM shortages following a staging deployment of the new Rebels Manager and excess GitLab runners consuming memory. Naturally, it was impossible to update you here during the outage. Thanks for your understanding.

Issues with sending as a result of zeroed hashes in outgoing message IDs have been resolved.

Brief disruptions for GitLab, Loomio and Base during an upgrade.

Report from our Swiss datacenter as to the brief outage:

"We had a defect switch between the data centers and have just replaced it. It was "only" the cross connect between the two different sites and we are checking out why the traffic was going the wrong direction when it stopped working"

Brief disruptions (a few minutes) at the Swiss datacenter. Seems to affect all hosts. Investigating.
