Omnivore.io Status - Incident History
Last updated: Mar 28, 2024, 11:40 PDT

NCR CloudConnect API - Increased error rate

Resolved (Mar 14, 12:35 PDT) - The number of timeouts has returned to normal levels.

Identified (Mar 14, 10:46 PDT) - Beginning around 17:03 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

MMS Outage

Resolved (Feb 28, 11:18 PST) - At around 19:18 UTC, as part of a larger release, Omnivore engineers removed an MMS ingress that was believed to be unused. During the release we noticed that MMS order counts dropped and immediately began to roll back. The rollback was complete by 19:37 UTC, and ordering traffic returned to normal at that time.

Upon investigation, we found that a manual DNS entry was in place that referenced the removed ingress. Because this entry was not committed to our infrastructure repository, we mistakenly believed the ingress to be unused. We will audit for any other manual DNS entries in our environment before continuing with this release.
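As a rough illustration of the kind of audit described in the update above, the sketch below compares a live DNS zone export against the records declared in an infrastructure repository and flags anything that exists only in DNS. The file names and JSON layout are hypothetical placeholders, not Omnivore's actual tooling.

```python
#!/usr/bin/env python3
"""Sketch: flag DNS records that exist in the live zone but are not
declared in the infrastructure repository (i.e., manual entries).

Assumptions (hypothetical, not from the incident report):
  - the live zone has been exported to `live_zone.json`, a list of
    {"name": ..., "type": ..., "value": ...} objects
  - the repo declares its records in `repo_records.json` with the same shape
"""
import json


def load_records(path):
    """Load DNS records and normalize them into comparable tuples."""
    with open(path) as f:
        records = json.load(f)
    return {
        (r["name"].rstrip(".").lower(), r["type"].upper(), r["value"])
        for r in records
    }


def main():
    live = load_records("live_zone.json")          # what DNS actually serves
    declared = load_records("repo_records.json")   # what the repo says should exist

    manual = sorted(live - declared)  # records nobody committed to the repo
    if not manual:
        print("No manual DNS entries found.")
        return
    print(f"{len(manual)} record(s) exist in DNS but not in the repo:")
    for name, rtype, value in manual:
        print(f"  {name} {rtype} {value}")


if __name__ == "__main__":
    main()
```

Whatever DNS provider and infrastructure-as-code format are actually in use, the idea is the same: diff what DNS serves against what the repository declares before deleting anything that looks unused.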
Brink API - Increased error rate

Resolved (Feb 15, 01:19 PST) - Around 09:15 UTC, Brink API calls began to succeed again.

Monitoring (Feb 15, 00:07 PST) - Beginning around 07:40 UTC, we observed an increased number of errors when calling the Brink API, impacting ticket reads and clock entries. API calls for ticket reads and clock entries will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our Brink contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Feb 5, 14:58 PST) - Error rates returned to normal levels around 20:58 UTC. We will continue to monitor for any further increases.

Identified (Jan 30, 05:22 PST) - Beginning around 08:21 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

Brink API - Increased error rate and Timeouts

Resolved (Jan 27, 09:38 PST) - Error rates returned to normal levels around 17:20 UTC. We will continue to monitor for any further increases.

Update (Jan 27, 09:05 PST) - After further investigation, we have found that the timeouts only occur on calls to https://api22.brinkpos.net. All other Brink hosts appear to be operational.

Monitoring (Jan 27, 08:53 PST) - Beginning around 16:50 UTC, we observed an increased number of timeouts when calling the Brink API, impacting our Brink locations. API calls to fetch tickets will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our Brink contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

Omnivore API - Degraded Performance

Resolved (Jan 16, 12:49 PST) - All systems are confirmed stable and the Omnivore API is functioning normally. This incident is now resolved.

Update (Jan 16, 11:16 PST) - We are continuing to monitor for any further issues.

Monitoring (Jan 16, 11:15 PST) - A fix has been implemented and API connectivity has been restored. We are continuing to monitor the effects.

Update (Jan 16, 10:07 PST) - We are still investigating the issues reported with the Omnivore API. Clients may experience errors and latency when accessing panel.omnivore.io. We'll provide updates as they come in.

Investigating (Jan 16, 07:34 PST) - We are currently investigating issues reported with the Omnivore API. Clients may experience errors and latency when accessing panel.omnivore.io.
Lavu Partner Outage

Update (Feb 5, 10:26 PST) - The Lavu team has communicated that they are targeting a fix for the end of Q1 or early Q2.

Update (Jan 12, 14:14 PST) - At this time we are not expecting a resolution until at least Tuesday, Jan 16th.

Update (Jan 12, 13:53 PST) - At around 21:47 UTC we were asked by Lavu to take further steps to prevent calls to their services. As such, we are taking action to set all Omnivore Lavu locations offline.

Monitoring (Jan 12, 13:38 PST) - Beginning around 21:26 UTC, at the request of Lavu, we disabled webhooks and background processing for Lavu locations to aid in Lavu's outage recovery. During this time webhooks will be delayed and data may become stale.

Lavu API - Increased Error Rate

Resolved (Jan 12, 12:00 PST) - This incident has been resolved.

Monitoring (Jan 12, 11:19 PST) - The Lavu API is currently experiencing intermittent degradation. Please see their status page for details: https://status.lavu.com. We will continue to monitor until the Lavu API returns to normal functionality. There are no further technical actions we can take at this time.

Lavu API - Increased Error Rate

Resolved (Jan 8, 16:59 PST) - This incident has been resolved.

Monitoring (Jan 8, 13:23 PST) - The Lavu API is currently experiencing intermittent degradation. Please see their status page for details: https://status.lavu.com. We will continue to monitor until the Lavu API returns to normal functionality. There are no further technical actions we can take at this time.

Lavu API - Increased Error Rate

Resolved (Jan 7, 20:34 PST) - This incident has been resolved.

Monitoring (Jan 7, 18:06 PST) - The Lavu API is currently experiencing intermittent degradation. Please see their status page for details: https://status.lavu.com. We will continue to monitor until the Lavu API returns to normal functionality. There are no further technical actions we can take at this time.
API Outage

Resolved (Nov 24, 18:31 PST) - All systems have been functioning normally, with API and webhook traffic flowing normally for several hours. We will follow up with a postmortem by 12/1/2023.

Monitoring (Nov 24, 14:25 PST) - We have identified the issue and implemented a fix. We are monitoring systems to ensure stability. API and webhook traffic are flowing normally.

Investigating (Nov 24, 13:55 PST) - We are currently investigating an issue that is affecting the Omnivore API.

Lavu API - Increased error rate

Resolved (Oct 20, 12:36 PDT) - The Lavu API outage was resolved around 19:00 UTC. All Omnivore API calls and webhooks involving Lavu locations have returned to normal operation.

Monitoring (Oct 20, 11:41 PDT) - The Lavu API is currently experiencing an outage. Please see their status page for details: https://status.lavu.com. We will continue to monitor until access to the Lavu API has been restored. There are no further technical actions we can take at this time.

Investigating (Oct 20, 11:28 PDT) - Beginning around 18:27 UTC, we observed an increased number of errors when calling the Lavu API. API calls to fetch ticket data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are currently investigating the root cause.

NCR CloudConnect API - Increased error rate

Resolved (Oct 11, 10:40 PDT) - Error rates returned to normal levels around 17:35 UTC. We will continue to monitor for any further increases.

Monitoring (Oct 11, 10:33 PDT) - Beginning around 17:20 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Sep 14, 06:15 PDT) - At 05:00 UTC, calls to the NCR CloudConnect API returned to baseline. We will continue to monitor the success of calls to the NCR CloudConnect API.

Identified (Sep 13, 12:26 PDT) - Beginning around 17:21 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. Static data populated by background tasks may become stale. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.
NCR CloudConnect API - Increased error rate

Resolved (Aug 9, 10:45 PDT) - At 17:35 UTC, we began seeing successful calls to the NCR CloudConnect API. We will continue to monitor the success of calls to the NCR CloudConnect API.

Identified (Aug 9, 10:28 PDT) - Beginning around 17:15 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. Static data populated by background tasks may become stale. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Jul 13, 14:53 PDT) - At 21:08 UTC on 7/13, error rates and timeouts for calls to the NCR CloudConnect API returned to nominal levels. We will continue to monitor the success of calls to the NCR CloudConnect API.

Identified (Jul 13, 13:31 PDT) - Beginning around 20:06 UTC on 7/13, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Jul 13, 06:25 PDT) - At 05:04 UTC on 7/13, error rates and timeouts for calls to the NCR CloudConnect API returned to nominal levels. We will continue to monitor the success of calls to the NCR CloudConnect API.

Identified (Jul 12, 12:08 PDT) - Beginning around 18:55 UTC on 7/12, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.
NCR CloudConnect API - Increased error rate

Resolved (Jul 12, 11:48 PDT) - At 18:31 UTC on 7/12, error rates and timeouts for calls to the NCR CloudConnect API returned to nominal levels. We will continue to monitor the success of calls to the NCR CloudConnect API.

Identified (Jul 12, 10:53 PDT) - Beginning around 17:20 UTC on 7/12, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Jun 27, 10:30 PDT) - At 16:03 UTC on 6/27, error rates and timeouts for calls to the NCR CloudConnect API returned to nominal levels. We will continue to monitor the success of calls to the NCR CloudConnect API.

Identified (Jun 27, 07:18 PDT) - Beginning around 09:40 UTC on 6/26, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Jun 26, 04:15 PDT) - Error rates returned to normal levels around 11:04 UTC. We will continue to monitor for any further increases.

Identified (Jun 26, 03:17 PDT) - Beginning around 09:52 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.
NCR CloudConnect API - Increased error rate

Resolved (Jun 15, 06:12 PDT) - Error rates returned to normal levels around 06:22 UTC. We will continue to monitor for any further increases.

Identified (Jun 14, 18:14 PDT) - Beginning around 00:52 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Jun 14, 17:38 PDT) - Error rates returned to normal levels around 00:14 UTC. We will continue to monitor for any further increases.

Identified (Jun 14, 16:38 PDT) - Beginning around 22:52 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Jun 14, 14:24 PDT) - Error rates returned to normal levels around 21:09 UTC. We will continue to monitor for any further increases.

Identified (Jun 14, 13:22 PDT) - Beginning around 19:52 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.
NCR CloudConnect API - Increased error rate

Resolved (Jun 14, 12:14 PDT) - Error rates returned to normal levels around 18:57 UTC. We will continue to monitor for any further increases.

Identified (Jun 14, 12:05 PDT) - Beginning around 17:17 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We are reaching out to our NCR contacts and will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.

NCR CloudConnect API - Increased error rate

Resolved (Jun 6, 08:31 PDT) - At 12:45 UTC, NCR CloudConnect API error rates and timeouts returned to nominal levels. We will continue to monitor the success of calls to the NCR CloudConnect API.

Identified (Jun 6, 05:47 PDT) - Beginning around 12:14 UTC, we observed an increased number of timeouts when calling the NCR CloudConnect API, impacting ticket and clock entry reads. API calls to fetch ticket and clock entry data will likely fail at an increased rate, and webhooks may be delayed until service is restored. We will continue to monitor until the issue is resolved. There are no further technical actions we can take to resolve the issue at this time.