<?xml version="1.0" encoding="UTF-8"?> <feed xml:lang="en-US" xmlns="http://www.w3.org/2005/Atom"> <id>tag:eu.githubstatus.com,2005:/history</id> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com"/> <link rel="self" type="application/atom+xml" href="https://eu.githubstatus.com/history.atom"/> <title>GitHub Enterprise Cloud - EU Status - Incident History</title> <updated>2024-12-02T07:41:56Z</updated> <author> <name>GitHub Enterprise Cloud - EU</name> </author> <entry> <id>tag:eu.githubstatus.com,2005:Incident/22187030</id> <published>2024-09-16T22:08:38Z</published> <updated>2024-10-29T06:17:59Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/y5sxgfqxmn0c"/> <title>Incident with Pages and Actions</title> <content type="html"><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>22:08</var> UTC</small><br><strong>Resolved</strong> - On September 16, 2024, between 21:11 UTC and 22:20 UTC, Actions and Pages services were degraded. Customers who deploy Pages from a source branch experienced delayed runs. Approximately 1,100 runs were delayed long enough to get marked as abandoned. The runs that weren't abandoned completed successfully after we recovered from the incident. Actions jobs experienced average delays of 23 minutes, with some jobs experiencing delays as high as 45 minutes. During the course of the incident, 17% of runs were delayed by more than 5 minutes. At peak, as many as 80% of runs experienced delays exceeding 5 minutes. The root cause was a misconfiguration in the service that manages runner connections, which caused CPU throttling and led to a performance degradation in that service.<br /><br />We mitigated the incident by diverting runner connections away from the misconfigured nodes. We are working to improve our internal monitoring and alerting to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>21:54</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>21:37</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Actions</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21967458</id> <published>2024-08-29T21:54:47Z</published> <updated>2024-10-29T06:17:59Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/jfvxhjr9mgfd"/> <title>Disruption with some GitHub services</title> <content type="html"><p><small>Aug <var data-var='date'>29</var>, <var data-var='time'>21:54</var> UTC</small><br><strong>Resolved</strong> - On August 29th, 2024, from 16:56 UTC to 21:42 UTC, we observed an elevated rate of traffic on our public edge, which triggered GitHub's rate limiting protections. This resulted in <0.1% of users being identified as false-positives, which they experienced as intermittent connection timeouts.
At 20:59 UTC the engineering team improved the system to remediate the false-positive identification of user traffic and restore normal traffic operations.</p><p><small>Aug <var data-var='date'>29</var>, <var data-var='time'>20:43</var> UTC</small><br><strong>Update</strong> - While we have seen a reduction in reports of users having connectivity issues to GitHub.com, we are still investigating the issue.</p><p><small>Aug <var data-var='date'>29</var>, <var data-var='time'>20:07</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with customers reporting temporary issues accessing GitHub.com</p><p><small>Aug <var data-var='date'>29</var>, <var data-var='time'>19:29</var> UTC</small><br><strong>Update</strong> - We are getting reports of users who aren't able to access GitHub.com and are investigating.</p><p><small>Aug <var data-var='date'>29</var>, <var data-var='time'>19:29</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21952390</id> <published>2024-08-28T23:43:58Z</published> <updated>2024-10-29T06:18:00Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/0dcn77ktj3y6"/> <title>Disruption with some GitHub services</title> <content type="html"><p><small>Aug <var data-var='date'>28</var>, <var data-var='time'>23:43</var> UTC</small><br><strong>Resolved</strong> - On August 28, 2024, from 21:40 to 23:43 UTC, up to 25% of unauthenticated dotcom traffic in SE Asia (representing <1% of global traffic) encountered HTTP 500 errors. We observed elevated error rates at one of our global points of presence, where geo-DNS health checks were failing. We identified unhealthy cloud hardware in the region, indicated by abnormal CPU utilization patterns. As a result, we drained the site at 23:26 UTC, which promptly restored normal traffic operations.</p><p><small>Aug <var data-var='date'>28</var>, <var data-var='time'>22:19</var> UTC</small><br><strong>Update</strong> - We are seeing cases of user impact in some locations and are continuing to investigate.</p><p><small>Aug <var data-var='date'>28</var>, <var data-var='time'>22:02</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21748203</id> <published>2024-08-15T00:30:15Z</published> <updated>2024-10-29T06:18:00Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/c5ccqg2rn3r1"/> <title>Incident with Pull Requests, Pages and Actions</title> <content type="html"><p><small>Aug <var data-var='date'>15</var>, <var data-var='time'>00:30</var> UTC</small><br><strong>Resolved</strong> - On August 14, 2024 between 23:02 UTC and 23:38 UTC, all GitHub services were inaccessible for all users.<br /> <br />This was due to a configuration change that impacted traffic routing within our database infrastructure, resulting in critical services unexpectedly losing database connectivity. There was no data loss or corruption during this incident.<br /><br />We mitigated the incident by reverting the change and confirming restored connectivity to our databases. At 23:38 UTC, traffic resumed and all services recovered to full health.
Out of an abundance of caution, we continued to monitor before resolving the incident at 00:30 UTC on August 15th, 2024.<br /><br />We will provide more details as our investigation proceeds and will post additional updates in the coming days.<br /></p><p><small>Aug <var data-var='date'>15</var>, <var data-var='time'>00:13</var> UTC</small><br><strong>Update</strong> - Git Operations is operating normally.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>23:45</var> UTC</small><br><strong>Update</strong> - The database infrastructure change is being rolled back. We are seeing improvements in service health and are monitoring for full recovery.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>23:29</var> UTC</small><br><strong>Update</strong> - We are experiencing interruptions in multiple public GitHub services. We suspect the impact is due to a database infrastructure related change that we are working on rolling back.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>23:19</var> UTC</small><br><strong>Update</strong> - Git Operations is experiencing degraded availability. We are continuing to investigate.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>23:16</var> UTC</small><br><strong>Update</strong> - We are investigating reports of issues with GitHub.com and GitHub API. We will continue to keep users updated on progress towards mitigation.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>23:13</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21573640</id> <published>2024-07-30T22:10:21Z</published> <updated>2024-10-29T06:18:00Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/v0s8w2c1wlm9"/> <title>Actions runs using large runners delayed for some customers</title> <content type="html"><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>22:10</var> UTC</small><br><strong>Resolved</strong> - On July 30th, 2024, between 13:25 UTC and 18:15 UTC, customers using Larger Hosted Runners may have experienced extended queue times for jobs that depended on a Runner with VNet Injection enabled in a virtual network within the East US 2 region. Runners without VNet Injection or those with VNet Injection in other regions were not affected. The issue was caused by an outage in a third party provider blocking a large percentage of VM allocations in the East US 2 region. Once the underlying issue with the third party provider was resolved, job queue times went back to normal. We are exploring the addition of support for customers to define VNet Injection Runners with VNets across multiple regions to minimize the impact of outages in a single region.</p><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>22:09</var> UTC</small><br><strong>Update</strong> - The mitigation for larger hosted runners has continued to be stable and all job delays are less than 5 minutes.
We will be resolving this incident.</p><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>21:44</var> UTC</small><br><strong>Update</strong> - We are continuing to hold this incident open while the team ensures that the mitigation put in place is stable.</p><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>21:00</var> UTC</small><br><strong>Update</strong> - Larger hosted runners job starts are stable and starting within expected timeframes. We are monitoring job start times in preparation to resolve this incident. No enqueued larger hosted runner jobs were dropped during this incident.</p><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>20:17</var> UTC</small><br><strong>Update</strong> - Over the past 30 minutes, all larger hosted runner jobs have started in less than 5 minutes. We are continuing to investigate delays in larger hosted runner job starts.</p><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>19:40</var> UTC</small><br><strong>Update</strong> - We are still investigating delays in customers' larger hosted runner job starts. Nearly all jobs are starting under 5 minutes. Only 1 customer's larger hosted runner job was delayed by more than 5 minutes in the past 30 minutes.</p><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>19:04</var> UTC</small><br><strong>Update</strong> - We are seeing improvements to the job start times for larger hosted runners for customers. In the last 30 minutes no customer jobs are delayed more than 5 minutes. We will continue monitoring for full recovery.</p><p><small>Jul <var data-var='date'>30</var>, <var data-var='time'>18:19</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21527661</id> <published>2024-07-25T21:05:02Z</published> <updated>2024-10-29T06:18:00Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/q1xfsnrh9npj"/> <title>Linking internal teams to external IDP groups was broken for some users between 15:17-20:44 UTC</title> <content type="html"><p><small>Jul <var data-var='date'>25</var>, <var data-var='time'>21:05</var> UTC</small><br><strong>Resolved</strong> - Between July 24th, 2024 at 15:17 UTC and July 25th, 2024 at 21:04 UTC, the external identities service was degraded and prevented customers from linking teams to external groups on the create/edit team page. Team creation and team edits would appear to function as normal, but the selected group would not be linked to the team after form submission.
This was due to a bug in the Primer experimental SelectPanel component that was mistakenly rolled out to customers via a feature flag.<br /><br />We mitigated the incident by scaling the feature flag back down to 0% of actors.<br /><br />We are making improvements to our release process and test coverage to avoid similar incidents in the future.</p><p><small>Jul <var data-var='date'>25</var>, <var data-var='time'>21:04</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21398870</id> <published>2024-07-13T19:27:04Z</published> <updated>2024-10-29T06:18:00Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/6dh5nl8kdrnz"/> <title>Incident with Copilot</title> <content type="html"><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>19:27</var> UTC</small><br><strong>Resolved</strong> - On July 13, 2024 between 00:01 and 19:27 UTC the Copilot service was degraded. During this time period, Copilot code completions error rate peaked at 1.16% and Copilot Chat error rate peaked at 63%. Between 01:00 and 02:00 UTC we were able to reroute traffic for Chat to bring error rates below 6%. During the time of impact customers would have seen delayed responses, errors, or timeouts during requests. GitHub code scanning autofix jobs were also delayed during this incident. <br /><br />A resource cleanup job was scheduled by the <a href="https://azure.status.microsoft/en-us/status/history?trackingid=4L44-3F0">Azure OpenAI (AOAI) service early July 13th</a> targeting a resource group thought to only contain unused resources. This resource group unintentionally contained critical, still-in-use resources that were then removed. The cleanup job was halted before removing all resources in the resource group. Enough resources remained that GitHub was able to mitigate while resources were reconstructed.<br /><br />We are working with AOAI to ensure mitigation is in place to prevent future impact. In addition, we will improve traffic rerouting processes to reduce time to mitigate in the future.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>19:26</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>18:01</var> UTC</small><br><strong>Update</strong> - Our upstream provider continues to recover and we expect services to return to normal as more progress is made. We will provide another update by 20:00 UTC.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>16:09</var> UTC</small><br><strong>Update</strong> - Our upstream provider is making good progress recovering and we are validating that services are nearing normal operations. We will provide another update by 18:00 UTC.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>11:18</var> UTC</small><br><strong>Update</strong> - Our upstream provider is gradually recovering the service. We will provide another update at 23:00 UTC.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>03:50</var> UTC</small><br><strong>Update</strong> - We are continuing to wait on our upstream provider to see full recovery.
We will provide another update at 11:00 UTC.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>03:20</var> UTC</small><br><strong>Update</strong> - The error rate for Copilot chat requests remains steady at less than 10%. We are continuing to investigate with our upstream provider.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>02:20</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance. We are continuing to investigate.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>02:19</var> UTC</small><br><strong>Update</strong> - We have applied several mitigations to Copilot chat, reducing errors to less than 10% of all chat requests. We are continuing to investigate the issue with our upstream provider.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>01:32</var> UTC</small><br><strong>Update</strong> - Copilot chat is experiencing degraded performance, impacting up to 60% of all chat requests. We are continuing to investigate the issue with our upstream provider.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>00:49</var> UTC</small><br><strong>Update</strong> - Copilot chat is currently experiencing degraded performance, impacting up to 60% of all chat requests. We are investigating the issue.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>00:29</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded availability. We are continuing to investigate.</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>00:18</var> UTC</small><br><strong>Update</strong> - Copilot API chat experiencing significant failures to backend services</p><p><small>Jul <var data-var='date'>13</var>, <var data-var='time'>00:18</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21380972</id> <published>2024-07-11T15:21:18Z</published> <updated>2024-10-29T06:18:01Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/bg1d9hk6wz50"/> <title>Incident with Copilot</title> <content type="html"><p><small>Jul <var data-var='date'>11</var>, <var data-var='time'>15:21</var> UTC</small><br><strong>Resolved</strong> - On July 11, 2024, between 10:20 UTC and 14:00 UTC Copilot Chat was degraded and experienced intermittent timeouts. This only impacted requests routed to one of our service region providers. The error rate peaked at 10% for all requests and 9% of users. This was due to host upgrades in an upstream service provider. While this was a planned event, processes and tooling were not in place to anticipate and mitigate this downtime.
<br /><br />We are working to improve our processes and tooling for future planned events and escalation paths with our upstream providers.<br /></p><p><small>Jul <var data-var='date'>11</var>, <var data-var='time'>15:21</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Jul <var data-var='date'>11</var>, <var data-var='time'>13:02</var> UTC</small><br><strong>Update</strong> - Copilot's Chat functionality is experiencing intermittent timeouts. We are investigating the issue.</p><p><small>Jul <var data-var='date'>11</var>, <var data-var='time'>13:02</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21349016</id> <published>2024-07-08T19:45:20Z</published> <updated>2024-10-29T06:18:01Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/n5j315mvx9pl"/> <title>Incident with Issues and Pages</title> <content type="html"><p><small>Jul <var data-var='date'> 8</var>, <var data-var='time'>19:45</var> UTC</small><br><strong>Resolved</strong> - On July 8th, 2024, between 18:18 UTC and 19:11 UTC, various services relying on static assets were degraded, including user uploaded content on github.com, access to docs.github.com and Pages sites, and downloads of Release assets and Packages. <br /><br />The outage primarily affected users in the vicinity of New York City, USA, due to a local CDN disruption. <br /><br />Service was restored without our intervention.<br /><br />We are working to improve our external monitoring, which failed to detect the issue, and will be evaluating a backup mechanism to keep critical services available, such as being able to load assets on GitHub.com, in the event of an outage with our CDN.<br /></p><p><small>Jul <var data-var='date'> 8</var>, <var data-var='time'>19:44</var> UTC</small><br><strong>Update</strong> - Our assets are serving normally again and all impact is resolved.</p><p><small>Jul <var data-var='date'> 8</var>, <var data-var='time'>19:16</var> UTC</small><br><strong>Update</strong> - We are beginning to see recovery of our assets and are monitoring for additional impact.</p><p><small>Jul <var data-var='date'> 8</var>, <var data-var='time'>19:01</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21148993</id> <published>2024-06-18T18:09:44Z</published> <updated>2024-10-29T06:18:01Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/pg5z3txs4288"/> <title>We are investigating degraded performance for GitHub Enterprise Importer migrations</title> <content type="html"><p><small>Jun <var data-var='date'>18</var>, <var data-var='time'>18:09</var> UTC</small><br><strong>Resolved</strong> - On June 18th, from 4:59pm UTC to 6:06pm UTC, customer migrations were unavailable and failing. This impacted all in-progress migrations during that time. This issue was due to an incorrect configuration on our database cluster.
We mitigated the issue by remediating the database configuration and are working with stakeholders to ensure safeguards are in place to prevent the issue going forward.</p><p><small>Jun <var data-var='date'>18</var>, <var data-var='time'>18:04</var> UTC</small><br><strong>Update</strong> - We have applied a configuration change to our migration service as a mitigation and are beginning to see recovery and an increase in successful migration runs. We are continuing to monitor.</p><p><small>Jun <var data-var='date'>18</var>, <var data-var='time'>17:48</var> UTC</small><br><strong>Update</strong> - We have identified what we believe to be the source of the migration errors and are applying a mitigation, which we expect will begin improving migration success rate.</p><p><small>Jun <var data-var='date'>18</var>, <var data-var='time'>17:15</var> UTC</small><br><strong>Update</strong> - We are investigating degraded performance for GitHub Enterprise Importer migrations. Some customers may see an increase in failed migrations. Investigation is ongoing.</p><p><small>Jun <var data-var='date'>18</var>, <var data-var='time'>17:14</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/21009830</id> <published>2024-06-05T19:27:21Z</published> <updated>2024-10-29T06:18:01Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/khh5n39b43ln"/> <title>We are investigating reports of degraded performance.</title> <content type="html"><p><small>Jun <var data-var='date'> 5</var>, <var data-var='time'>19:27</var> UTC</small><br><strong>Resolved</strong> - On June 5, 2024, between 17:05 UTC and 19:27 UTC, the GitHub Issues service was degraded. During that time, no events related to projects were displayed on issue timelines. These events indicate when an issue was added to or removed from a project and when its status changed within a project. The data couldn't be loaded due to a misconfiguration of the service backing these events. This happened after a scheduled secret rotation when the wrongly configured service continued using the old secrets which had expired. <br /><br />We mitigated the incident by remediating the service configuration and have started simplifying the configuration to avoid similar misconfigurations in the future.</p><p><small>Jun <var data-var='date'> 5</var>, <var data-var='time'>17:22</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/20833983</id> <published>2024-05-20T17:05:48Z</published> <updated>2024-10-29T06:18:01Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/bzyzwp8j2hd0"/> <title>We are investigating reports of degraded performance.</title> <content type="html"><p><small>May <var data-var='date'>20</var>, <var data-var='time'>17:05</var> UTC</small><br><strong>Resolved</strong> - Between May 19th 3:40AM UTC and May 20th 5:40PM UTC the service responsible for rendering Jupyter notebooks was degraded. During this time customers were unable to render Jupyter Notebooks.<br /><br />This occurred due to an issue with a Redis dependency which was mitigated by restarting it. An issue with our monitoring led to a delay in our response.
We are working to improve the quality and accuracy of our monitors to reduce the time to detection.</p><p><small>May <var data-var='date'>20</var>, <var data-var='time'>17:01</var> UTC</small><br><strong>Update</strong> - We are beginning to see recovery in rendering Jupyter notebooks and are continuing to monitor.</p><p><small>May <var data-var='date'>20</var>, <var data-var='time'>16:52</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/20489675</id> <published>2024-04-09T20:17:07Z</published> <updated>2024-10-29T06:18:02Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/nz0xn4wggzkt"/> <title>Incident with Actions</title> <content type="html"><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>20:17</var> UTC</small><br><strong>Resolved</strong> - On April 9, 2024, between 18:00 and 20:17 UTC, Actions was degraded and had failures for new and existing customers. During this time, Actions failed to start for 5,426 new repositories, and 1% of runs for existing customers were delayed, with half of those failing due to an infrastructure error.<br /><br />The root cause was an expired certificate which caused authentication to fail between internal services. The incident was mitigated once the cert was rotated.<br /><br />We are working to improve our automation to ensure certs are rotated before expiration.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>20:12</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>19:43</var> UTC</small><br><strong>Update</strong> - We continue to work to resolve issues with repositories not being able to enable Actions and Actions network configuration setup not working properly. We have confirmed a fix and are in the process of deploying it to production. Another update will be shared within the next 30 minutes.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>19:18</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/20455977</id> <published>2024-04-05T09:18:11Z</published> <updated>2024-10-29T06:18:02Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/6dvnx6lf5s14"/> <title>We are investigating reports of degraded performance.</title> <content type="html"><p><small>Apr <var data-var='date'> 5</var>, <var data-var='time'>09:18</var> UTC</small><br><strong>Resolved</strong> - On April 5, 2024, between 8:11 and 8:58 UTC a number of GitHub services were degraded, returning error responses. Web request error rate peaked at 6%, and API request error rate peaked at 10%. Actions had 103,660 workflow runs fail to start. <br /><br />A database load balancer change caused connection failures in one of our three data centers to various critical database clusters. The incident was mitigated once that change was rolled back.<br /><br />We have updated our deployment pipeline to better detect this problem in earlier stages of rollout to reduce impact to end users.
<br /></p><p><small>Apr <var data-var='date'> 5</var>, <var data-var='time'>08:54</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/20455838</id> <published>2024-04-05T08:53:39Z</published> <updated>2024-10-29T06:18:02Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/09j86th51zpm"/> <title>We are investigating reports of degraded performance.</title> <content type="html"><p><small>Apr <var data-var='date'> 5</var>, <var data-var='time'>08:53</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Apr <var data-var='date'> 5</var>, <var data-var='time'>08:31</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10572303</id> <published>2022-07-14T01:09:39Z</published> <updated>2024-10-29T06:18:02Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/qnywzcvw7py6"/> <title>Incident affecting API Requests</title> <content type="html"><p><small>Jul <var data-var='date'>14</var>, <var data-var='time'>01:09</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jul <var data-var='date'>14</var>, <var data-var='time'>00:51</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for API Requests</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10551720</id> <published>2022-07-12T16:07:20Z</published> <updated>2024-10-29T06:18:03Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/s5pbg93m6398"/> <title>Incident affecting API Requests</title> <content type="html"><p><small>Jul <var data-var='date'>12</var>, <var data-var='time'>16:07</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jul <var data-var='date'>12</var>, <var data-var='time'>13:45</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for API Requests</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10408896</id> <published>2022-06-26T20:30:57Z</published> <updated>2024-10-29T06:18:03Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/np67c0f4nk7g"/> <title>Incident affecting infrastructure</title> <content type="html"><p><small>Jun <var data-var='date'>26</var>, <var data-var='time'>20:30</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'>26</var>, <var data-var='time'>20:28</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for infrastructure</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10380694</id> <published>2022-06-22T17:26:13Z</published> <updated>2024-10-29T06:18:03Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/hvjq87p6l9d7"/> <title>Incident affecting GitHub Actions</title> <content type="html"><p><small>Jun <var data-var='date'>22</var>, <var data-var='time'>17:26</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'>22</var>, <var data-var='time'>17:24</var> 
UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for GitHub Actions</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10375790</id> <published>2022-06-22T05:53:11Z</published> <updated>2024-10-29T06:18:03Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/lmlxj3jnfm82"/> <title>Incident affecting GitHub Packages</title> <content type="html"><p><small>Jun <var data-var='date'>22</var>, <var data-var='time'>05:53</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'>22</var>, <var data-var='time'>05:16</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for GitHub Packages</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10303545</id> <published>2022-06-15T16:17:17Z</published> <updated>2024-10-29T06:18:03Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/l0gjy0tcv1wp"/> <title>Incident affecting GitHub Pages</title> <content type="html"><p><small>Jun <var data-var='date'>15</var>, <var data-var='time'>16:17</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'>15</var>, <var data-var='time'>16:01</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for GitHub Pages</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10271542</id> <published>2022-06-13T23:48:23Z</published> <updated>2024-10-29T06:18:04Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/9rfr9vqps0yn"/> <title>Incident affecting infrastructure</title> <content type="html"><p><small>Jun <var data-var='date'>13</var>, <var data-var='time'>23:48</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'>13</var>, <var data-var='time'>22:15</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for infrastructure</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10267570</id> <published>2022-06-13T17:48:22Z</published> <updated>2024-10-29T06:18:04Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/kgrdcvrq1vc0"/> <title>Incident affecting GitHub Pages</title> <content type="html"><p><small>Jun <var data-var='date'>13</var>, <var data-var='time'>17:48</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'>13</var>, <var data-var='time'>16:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for GitHub Pages</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10267473</id> <published>2022-06-13T17:47:18Z</published> <updated>2024-10-29T06:18:04Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/fj86dn7k8vgy"/> <title>Incident affecting GitHub Actions</title> <content type="html"><p><small>Jun <var data-var='date'>13</var>, <var data-var='time'>17:47</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'>13</var>, <var data-var='time'>16:50</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of 
degraded availability for GitHub Actions</p></content> </entry> <entry> <id>tag:eu.githubstatus.com,2005:Incident/10188894</id> <published>2022-06-08T21:04:10Z</published> <updated>2024-10-29T06:18:04Z</updated> <link rel="alternate" type="text/html" href="https://eu.githubstatus.com/incidents/1ttw8d4qlc0h"/> <title>Incident affecting Git Operations</title> <content type="html"><p><small>Jun <var data-var='date'> 8</var>, <var data-var='time'>21:04</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Jun <var data-var='date'> 8</var>, <var data-var='time'>20:14</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Git Operations</p></content> </entry> </feed>