<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"> <channel> <title>Heroku</title> <link>http://blog.heroku.com</link> <description>The Heroku Blog</description> <ttl>60</ttl> <item> <title>Router 2.0 and HTTP/2 Now Generally Available </title> <link>https://blog.heroku.com/router-2dot0-http2-now-generally-available</link> <pubDate>Thu, 21 Nov 2024 18:13:19 GMT</pubDate> <guid>https://blog.heroku.com/router-2dot0-http2-now-generally-available</guid> <description><p>Back in September 2023, we announced our <a href="https://blog.heroku.com/router-2dot0-the-road-to-beta">Public Beta</a> for our new Common Runtime router: <a href="https://devcenter.heroku.com/articles/http-routing#router-2-0">Router 2.0</a>.</p> <p>Now generally available, Router 2.0 will replace the legacy Common Runtime router in the coming months, and bring new networking capabilities and performance to our customers. </p> <p>The beta launch of Router 2.0 also enabled us to <a href="https://blog.heroku.com/heroku-http2-public-beta">deliver HTTP/2</a> to our customers. And now, because Router 2.0 has become generally available, HTTP/2 is also generally available for all common runtime customers and even <a href="https://devcenter.heroku.com/articles/routing-in-private-spaces#http-2">Private Spaces</a> customers too.</p> <p>We’re excited to have Router 2.0 be the foundation for Heroku to deliver new cutting edge networking features and performance improvements for years to come.</p> <!-- more --> <h2 class="anchored"> <a name="why-a-new-router" href="#why-a-new-router">Why a New Router?</a> </h2> <p>Why build a new router instead of improving the existing one? Our primary motivator has been faster and safer delivery of new routing features for our customers. You can see the full rationale behind the change in our <a href="https://blog.heroku.com/router-2dot0-the-road-to-beta">Public Beta post</a>.</p> <h2 class="anchored"> <a name="lessons-learned-from-public-beta" href="#lessons-learned-from-public-beta">Lessons Learned from Public Beta</a> </h2> <p>Over the past months, Router 2.0 has been available in public beta, allowing us to gather valuable insights and iterate on its design. Because of early adopter customers and a wealth of feedback through our public roadmap, we were able to make dozens of improvements to the Router and ensure it was fully vetted before promoting it to a GA state. </p> <p>We made all sorts of improvements during that time, and all of them were fairly straight-forward with one exception involving <a href="https://github.com/puma/puma">Puma</a>-based applications. Through our investigations, we actually discovered a bug in Puma itself, and were able to contribute back to the community to get it resolved. </p> <p>The in-depth analysis below showcases the engineering investigation that took place during the Beta period and the amount of rigorous testing that was done to ensure our new platform met the level of performance and trust that our customers expect. </p> <p> <a class="btn btn-lg btn-primary-lightning" href="https://blog.heroku.com/pumas-routers-keepalives-ohmy" style="text-decoration: none;">Pumas, Routers, and Keepalives-Oh My!</a> </p> <h2 class="anchored"> <a name="tips-and-tricks-for-leveraging-router-2-0" href="#tips-and-tricks-for-leveraging-router-2-0">Tips and Tricks for Leveraging Router 2.0</a> </h2> <p>Ready to try Router 2.0? 
Well, here are some helpful tips &amp; tricks from the folks who know it best:</p> <p> <a class="btn btn-lg btn-primary-lightning" href="https://blog.heroku.com/tips-tricks-router-2dot0-migration" style="text-decoration: none;">Tips &amp; Tricks for Migrating to Router 2.0</a> </p> <h2 class="anchored"> <a name="the-power-of-http-2" href="#the-power-of-http-2">The Power of HTTP/2</a> </h2> <p>Starting today, HTTP/2 support is generally available for both Common Runtime customers and <a href="https://www.heroku.com/private-spaces">Private Spaces</a> customers. </p> <p>HTTP/2 support is one of the <a href="https://github.com/heroku/roadmap/issues/34">most requested</a> and desired improvements for the <a href="https://www.heroku.com/platform">Heroku platform</a>. HTTP/2 can be significantly faster than HTTP/1.1 by introducing features like multiplexing and header compression to reduce latency and therefore improve the end-user experience of Heroku apps. We’re excited to bring the benefits of HTTP/2 to all Heroku customers. </p> <p>You can find even more information about the benefits of HTTP/2 and how it works on Heroku from our <a href="https://blog.heroku.com/heroku-http2-public-beta">Public Beta Launch Blog</a>. </p> <p>Stay tuned for an upcoming blog post and demo showcasing the observable performance improvements when enabling HTTP/2 for your web application!</p> <h2 class="anchored"> <a name="get-started-today" href="#get-started-today">Get Started Today</a> </h2> <h3 class="anchored"> <a name="enable-router-2-0" href="#enable-router-2-0">Enable Router 2.0</a> </h3> <p>To start routing web requests through Router 2.0 for your Common Runtime app, simply run the command: </p> <pre><code class="language-bash">$ heroku features:enable http-routing-2-dot-0 -a &lt;app name&gt; </code></pre> <h3 class="anchored"> <a name="enable-http-2" href="#enable-http-2">Enable HTTP/2</a> </h3> <p><u>Common Runtime:</u></p> <p>HTTP/2 is now enabled by default on Router 2.0. If you run the command above, your application will begin to handle HTTP/2 traffic.</p> <p>A valid TLS certificate is required for HTTP/2. We recommend using <a href="https://devcenter.heroku.com/articles/automated-certificate-management">Heroku Automated Certificate Management</a>. </p> <p>In the Common Runtime, we support HTTP/2 on custom domains, but not on the built-in <code>&lt;app-name-cff7f1443a49&gt;.herokuapp.com</code> domain.</p> <p>To disable HTTP/2, while still using Router 2.0, you can use the command: </p> <pre><code class="language-bash">heroku labs:enable http-disable-http2 -a &lt;app name&gt; </code></pre> <p><u>Private Spaces:</u></p> <p>To enable HTTP/2 for a Private Spaces app, you can use the command: </p> <pre><code class="language-bash">$ heroku features:enable spaces-http2 -a &lt;app name&gt; </code></pre> <p>In Private Spaces, we support HTTP/2 on both custom domains and the built-in default app domain.</p> <p>To disable HTTP/2, simply disable the <code>spaces-http2</code> feature flag on your app.</p> <h2 class="anchored"> <a name="the-exciting-future-of-heroku-networking" href="#the-exciting-future-of-heroku-networking">The Exciting Future of Heroku Networking</a> </h2> <p>We’re really excited to have brought this entire new routing platform online through a rigorously tested beta period. We appreciate all of the patience and support from our customers as we built out Router 2.0 and its associated features. </p> <p>This is only the beginning. 
Now that Router 2.0 is GA, we can start on the next aspects of our roadmap to bring even more innovative and modern features online like enhanced Network Error Logging, HTTP/2 all the way to the dyno, HTTP/3, mTLS, and others. </p> <p>We'll continue monitoring the <a href="https://github.com/heroku/roadmap">public roadmap</a> and your feedback as we explore future networking and routing enhancements, especially our continued research on expanding our networking capabilities. </p> </description> <author>Ethan Limchayseng</author> </item> <item> <title>Pumas, Routers & Keepalives—Oh my!</title> <link>https://blog.heroku.com/pumas-routers-keepalives-ohmy</link> <pubDate>Thu, 21 Nov 2024 18:13:05 GMT</pubDate> <guid>https://blog.heroku.com/pumas-routers-keepalives-ohmy</guid> <description><p><a href="https://devcenter.heroku.com/changelog-items/3063">This week</a>, Heroku made <a href="https://devcenter.heroku.com/articles/http-routing#legacy-router-and-router-2-0">Router 2.0</a> generally available, bringing features like <a href="https://devcenter.heroku.com/changelog-items/3066">HTTP/2</a>, performance improvements and reliability enhancements out of the <a href="https://devcenter.heroku.com/articles/heroku-beta-features">beta program</a>!</p> <p>Throughout the Router 2.0 beta, our engineering team has addressed several bugs, all fairly straight-forward with one exception involving <a href="https://github.com/puma/puma">Puma</a>-based applications. A small subset of Puma applications would experience increased <a href="https://devcenter.heroku.com/articles/metrics#response-time">response times</a> upon enabling the Router 2.0 flag, reflected in customers’ Heroku dashboards and router logs. After thorough router investigation and peeling back Puma’s server code, we realized what we had stumbled upon was not actually a Router 2.0 performance issue. The root cause was a bug in Puma! This blog takes a deep dive into that investigation, including some tips for avoiding the bug on the Heroku platform while a fix in Puma is being developed. If you’d like a shorter ride (aka. the TL;DR), skip to <a href="#the-solution">The Solution</a> section of this blog. For the full story and all the technical nitty gritty, read on.</p> <!-- more --> <h2 class="anchored"> <a name="reproduction" href="#reproduction">Reproduction</a> </h2> <p>The long response times issue first surfaced through a customer support ticket for an application running a Puma + Rails web server. As the customer reported, in high load scenarios, the performance differences between Router 2.0 and the legacy router were disturbingly stark. An application scaled to 2 <code>Standard-1X</code> dynos would handle 30 requests per second just fine through the legacy router. Through Router 2.0, the same traffic would produce very long tail <a href="https://devcenter.heroku.com/articles/metrics#response-time">response times</a> (95th and 99th percentiles). Under enough load, <a href="https://devcenter.heroku.com/articles/metrics#throughput">throughput</a> would drop and requests would fail with <a href="https://devcenter.heroku.com/articles/error-codes#h12-request-timeout"><code>H12: Request Timeout</code></a>. 
The impact was immediate upon enabling the <code>http-routing-2-dot-0</code> feature flag:</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1732046033-bad_latencies_r2_enabled.png" alt="bad_latencies_r2_enabled"></p> <p>At first, our team of engineers had difficulty reproducing the above, despite running a similarly configured Puma + Rails app on the same framework and language versions. We consistently saw good response times from our app.</p> <p>Then we tried varying the Rails application’s internal response time. We injected some artificial server lag of 200 milliseconds, and that’s when things really took off:</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1732046066-bad_latencies_200ms.png" alt="bad_latencies_200ms"></p> <p>This was quite the realization! In staging environments, Router 2.0 <a href="https://blog.heroku.com/router-2dot0-the-road-to-beta#load-test-continuously">is subject to automatic load tests</a> that run continuously, at varied request rates, body sizes, protocol versions, etc. These request rates routinely reach much higher levels than 30 requests per second. However, the target applications of these load tests did not include a Heroku app running Puma + Rails with any significant server-side lag.</p> <h2 class="anchored"> <a name="investigation" href="#investigation">Investigation 🔍</a> </h2> <p>With a reproduction in hand, we were now in a position to investigate the high response times. We spun up our test app in a staging environment and started injecting a steady load of 30 requests per second.</p> <p>Our first thought was that perhaps the legacy router is faster at forwarding requests to the dyno because its underlying TCP client manages connections in a way that plays nicer with the Puma server. 
We hopped on a router instance and began dumping <code>netstat</code> connection states for one of our Puma app's web dynos :</p> <p>Connections from <strong>legacy router → dyno</strong></p> <pre><code class="language-bash">root@router.1019708 | # netstat | grep ip-10-1-38-72.ec2:11059 tcp 0 0 ip-10-1-87-57.ec2:28631 ip-10-1-38-72.ec2:11059 ESTABLISHED tcp 0 0 ip-10-1-87-57.ec2:30717 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:15205 ip-10-1-38-72.ec2:11059 ESTABLISHED tcp 0 0 ip-10-1-87-57.ec2:17919 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:24521 ip-10-1-38-72.ec2:11059 TIME_WAIT </code></pre> <p>Connections from <strong>Router 2.0 → dyno</strong></p> <pre><code class="language-bash">root@router.1019708 | # netstat | grep ip-10-1-38-72.ec2:11059 tcp 0 0 ip-10-1-87-57.ec2:24630 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:22476 ip-10-1-38-72.ec2:11059 ESTABLISHED tcp 0 0 ip-10-1-87-57.ec2:38438 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:38444 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:31034 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:38448 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:41882 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:23622 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:31060 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:31042 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:23648 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:31054 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:23638 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:38436 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:31064 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:22492 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:38414 ip-10-1-38-72.ec2:11059 TIME_WAIT tcp 0 0 ip-10-1-87-57.ec2:42218 ip-10-1-38-72.ec2:11059 ESTABLISHED tcp 0 0 ip-10-1-87-57.ec2:41880 ip-10-1-38-72.ec2:11059 TIME_WAIT </code></pre> <p>In the legacy router case, it seemed like there were fewer connections sitting in <code>TIME_WAIT</code>. This TCP state is a normal stop point along the lifecycle of a connection. It means the remote host (dyno) has sent a <code>FIN</code> indicating the connection should be closed. The local host (router) has sent back an <code>ACK</code>, acknowledging the connection is closed.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1732046205-time_wait.png" alt="time_wait"></p> <p>The connection hangs out for some time in <code>TIME_WAIT</code>, with the value varying among operating systems. The Linux default is 2 minutes. Once that timeout is hit, the socket is reclaimed and the router is free to re-use the <code>address + port</code> combination for a new connection.</p> <p>With this understanding, we formed a hypothesis that the Router 2.0 HTTP client was churning through connections really quickly. Perhaps the new router was opening connections and forwarding requests at a faster rate than the legacy router, thus overwhelming the dyno.</p> <p>Router 2.0 is written in Go and relies upon the language’s standard HTTP package. Some research turned up various tips for configuring Go’s <code>http.Transport</code> to avoid connection churn. The main recommendation involved tuning <a href="https://pkg.go.dev/net/http#Transport.MaxIdleConnsPerHost"><code>MaxIdleConnsPerHost</code></a> . 
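</p> <p>For context, raising that cap on a Go client looks roughly like the following. This is a minimal sketch for illustration only, not Router 2.0’s actual configuration:</p> <pre><code class="language-go">package main

import "net/http"

func main() {
	// Allow up to 100 idle keep-alive connections per backend host
	// instead of the package default of 2.
	transport := &amp;http.Transport{
		MaxIdleConnsPerHost: 100,
	}
	client := &amp;http.Client{Transport: transport}
	_ = client // use the client to issue requests as usual
}
</code></pre> <p>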
Without explicitly setting this configuration, the default value of 2 is used.</p> <pre><code class="language-go">type Transport struct {
	// MaxIdleConnsPerHost, if non-zero, controls the maximum idle
	// (keep-alive) connections to keep per-host. If zero,
	// DefaultMaxIdleConnsPerHost is used.
	MaxIdleConnsPerHost int
	...
}

const DefaultMaxIdleConnsPerHost = 2
</code></pre> <p>The problem with a low cap on idle connections per host is that it forces Go to close connections more often. For example, if this value is set higher, say to 10, our HTTP transport will keep up to 10 idle connections for this dyno in the pool. Only when the 11th connection goes idle does the transport start closing connections. With the number limited to 2, the transport will close more connections, which also means opening more connections to our dyno. This could put strain on the dyno as it requires Puma to spend more time handling connections and less time answering requests.</p> <p>We wanted to test our hypothesis, so we set <code>MaxIdleConnsPerHost: 100</code> on the Router 2.0 transport in staging. The connection distribution did change, and now Router 2.0 connections were more stable than before:</p> <pre><code class="language-bash">root@router.1020195 | # netstat | grep 'ip-10-1-2-62.ec2.:37183' tcp 0 0 ip-10-1-34-185.ec:36350 ip-10-1-2-62.ec2.:37183 ESTABLISHED tcp 0 0 ip-10-1-34-185.ec:11956 ip-10-1-2-62.ec2.:37183 ESTABLISHED tcp 0 0 ip-10-1-34-185.ec:51088 ip-10-1-2-62.ec2.:37183 ESTABLISHED tcp 0 0 ip-10-1-34-185.ec:60876 ip-10-1-2-62.ec2.:37183 ESTABLISHED </code></pre> <p>To our dismay, this had zero positive effect on our tail response times. We were still seeing the 99th percentile at well over 2 seconds for a Rails endpoint that should only take about 200 milliseconds to respond.</p> <p>We tried changing some other configurations on the Go HTTP transport, but saw no improvement. After several rounds of updating a config, waiting for the router artifact to build, and then waiting for the deployment to our staging environment, we began to wonder—can we reproduce this issue locally?</p> <h3 class="anchored"> <a name="going-local" href="#going-local">Going local</a> </h3> <p>Fortunately, we already had a local integration test setup for running requests through Router 2.0 to a dyno. We typically utilize this setup for verifying features and fixes, rarely for assessing performance. We subbed out our locally running “dyno” for a Puma server with a built-in 200ms lag on the <code>/fixed</code> endpoint. 
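</p> <p>For reference, a test server along these lines can be approximated with a tiny Rack app run under Puma. This is an illustrative sketch, not our actual test app (which was a Rails application):</p> <pre><code class="language-ruby"># config.ru (run with: puma config.ru)
# Sleeps 200ms on /fixed to simulate server-side work, then responds.
run lambda { |env|
  sleep 0.2 if env["PATH_INFO"] == "/fixed"
  [200, { "content-type" =&gt; "text/plain" }, ["ok\n"]]
}
</code></pre> <p>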
We then fired off 200 requests over 10 different connections with <a href="https://github.com/rakyll/hey">hey</a>:</p> <pre><code class="language-bash">❯ hey -q 200 -c 10 -host 'purple-local-staging.herokuapp.com' http://localhost:80/fixed Summary: Total: 8.5804 secs Slowest: 2.5706 secs Fastest: 0.2019 secs Average: 0.3582 secs Requests/sec: 23.3090 Total data: 600 bytes Size/request: 3 bytes Response time histogram: 0.202 [1] | 0.439 [185] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.676 [0] | 0.912 [0] | 1.149 [0] | 1.386 [0] | 1.623 [0] | 1.860 [0] | 2.097 [1] | 2.334 [6] |■ 2.571 [7] |■■ Latency distribution: 10% in 0.2029 secs 25% in 0.2038 secs 50% in 0.2046 secs 75% in 0.2086 secs 90% in 0.2388 secs 95% in 2.2764 secs 99% in 2.5351 secs Details (average, fastest, slowest): DNS+dialup: 0.0003 secs, 0.2019 secs, 2.5706 secs DNS-lookup: 0.0002 secs, 0.0000 secs, 0.0034 secs req write: 0.0003 secs, 0.0000 secs, 0.0280 secs resp wait: 0.3570 secs, 0.2018 secs, 2.5705 secs resp read: 0.0002 secs, 0.0000 secs, 0.0175 secs Status code distribution: [200] 200 responses </code></pre> <p>As you can see, the 95th percentile of response times is over 2 seconds, just as we had seen while running this experiment on the platform. We were now starting to worry that the router itself was inflating the response times. We tried targeting Puma directly at <code>localhost:3000</code>, bypassing the router altogether:</p> <pre><code class="language-bash">❯ hey -q 200 -c 10 http://localhost:3000/fixed Summary: Total: 8.3314 secs Slowest: 2.4579 secs Fastest: 0.2010 secs Average: 0.3483 secs Requests/sec: 24.0055 Total data: 600 bytes Size/request: 3 bytes Response time histogram: 0.201 [1] | 0.427 [185] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.652 [0] | 0.878 [0] | 1.104 [0] | 1.329 [0] | 1.555 [0] | 1.781 [0] | 2.007 [0] | 2.232 [2] | 2.458 [12] |■■■ Latency distribution: 10% in 0.2017 secs 25% in 0.2019 secs 50% in 0.2021 secs 75% in 0.2026 secs 90% in 0.2042 secs 95% in 2.2377 secs 99% in 2.4433 secs Details (average, fastest, slowest): DNS+dialup: 0.0002 secs, 0.2010 secs, 2.4579 secs DNS-lookup: 0.0001 secs, 0.0000 secs, 0.0016 secs req write: 0.0001 secs, 0.0000 secs, 0.0012 secs resp wait: 0.3479 secs, 0.2010 secs, 2.4518 secs resp read: 0.0000 secs, 0.0000 secs, 0.0003 secs Status code distribution: [200] 200 responses </code></pre> <p>Wow! These results suggested the issue is reproducible with any ‘ole Go HTTP client and a Puma server. We next wanted to test out a different client. The load injection tool, <code>hey</code> is also written in Go, just like Router 2.0. 
We next tried <a href="https://httpd.apache.org/docs/current/programs/ab.html"><code>ab</code></a> which is written in C:</p> <pre><code class="language-bash">❯ ab -c 10 -n 200 http://127.0.0.1:3000/fixed This is ApacheBench, Version 2.3 &lt;$Revision: 1913912 $&gt; Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking 127.0.0.1 (be patient) Completed 100 requests Completed 200 requests Finished 200 requests Server Software: Server Hostname: 127.0.0.1 Server Port: 3000 Document Path: /fixed Document Length: 3 bytes Concurrency Level: 10 Time taken for tests: 8.538 seconds Complete requests: 200 Failed requests: 0 Total transferred: 35000 bytes HTML transferred: 600 bytes Requests per second: 23.42 [#/sec] (mean) Time per request: 426.911 [ms] (mean) Time per request: 42.691 [ms] (mean, across all concurrent requests) Transfer rate: 4.00 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 2 Processing: 204 409 34.6 415 434 Waiting: 204 409 34.7 415 434 Total: 205 410 34.5 415 435 Percentage of the requests served within a certain time (ms) 50% 415 66% 416 75% 416 80% 417 90% 417 95% 418 98% 420 99% 429 100% 435 (longest request) </code></pre> <p>Another wow! The longest request took about 400 milliseconds, much lower than the 2 seconds above. Had we just stumbled upon some fundamental incompatibility between Go’s standard HTTP client and Puma? Not so fast.</p> <p>A deeper dive into the <code>ab</code> documentation surfaced this option:</p> <pre><code class="language-bash">❯ ab -h Usage: ab [options] [http[s]://]hostname[:port]/path Options are: ... -k Use HTTP KeepAlive feature </code></pre> <p>That’s different than <code>hey</code>’s default of enabling keepalive by default. Could that be significant? We re-ran <code>ab</code> with <code>-k</code>:</p> <pre><code class="language-bash">❯ ab -k -c 10 -n 200 http://127.0.0.1:3000/fixed This is ApacheBench, Version 2.3 &lt;$Revision: 1913912 $&gt; Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking 127.0.0.1 (be patient) Completed 100 requests Completed 200 requests Finished 200 requests Server Software: Server Hostname: 127.0.0.1 Server Port: 3000 Document Path: /fixed Document Length: 3 bytes Concurrency Level: 10 Time taken for tests: 8.564 seconds Complete requests: 200 Failed requests: 0 Keep-Alive requests: 184 Total transferred: 39416 bytes HTML transferred: 600 bytes Requests per second: 23.35 [#/sec] (mean) Time per request: 428.184 [ms] (mean) Time per request: 42.818 [ms] (mean, across all concurrent requests) Transfer rate: 4.49 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.5 0 6 Processing: 201 405 609.0 202 2453 Waiting: 201 405 609.0 202 2453 Total: 201 406 609.2 202 2453 Percentage of the requests served within a certain time (ms) 50% 202 66% 203 75% 203 80% 204 90% 2030 95% 2242 98% 2267 99% 2451 100% 2453 (longest request) </code></pre> <p>Now the output looked just like the <code>hey</code> output. 
Next, we ran <code>hey</code> with keepalives <em>disabled</em>:</p> <pre><code class="language-bash">❯ hey -disable-keepalive -q 200 -c 10 http://localhost:3000/fixed Summary: Total: 8.3588 secs Slowest: 0.4412 secs Fastest: 0.2091 secs Average: 0.4115 secs Requests/sec: 23.9269 Total data: 600 bytes Size/request: 3 bytes Response time histogram: 0.209 [1] | 0.232 [3] |■ 0.255 [1] | 0.279 [0] | 0.302 [0] | 0.325 [0] | 0.348 [0] | 0.372 [0] | 0.395 [0] | 0.418 [172] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.441 [23] |■■■■■ Latency distribution: 10% in 0.4140 secs 25% in 0.4152 secs 50% in 0.4160 secs 75% in 0.4171 secs 90% in 0.4181 secs 95% in 0.4187 secs 99% in 0.4344 secs Details (average, fastest, slowest): DNS+dialup: 0.0011 secs, 0.2091 secs, 0.4412 secs DNS-lookup: 0.0006 secs, 0.0003 secs, 0.0017 secs req write: 0.0001 secs, 0.0000 secs, 0.0011 secs resp wait: 0.4102 secs, 0.2035 secs, 0.4343 secs resp read: 0.0001 secs, 0.0000 secs, 0.0002 secs Status code distribution: [200] 200 responses </code></pre> <p>Again, no long tail response times, and the median values were comparable to the first run with <code>ab</code>. </p> <p>Even better, this neatly explained the performance difference between Router 2.0 and the legacy router. Router 2.0 adds support for HTTP keepalives by default, in line with the <a href="https://datatracker.ietf.org/doc/html/rfc7230#appendix-A.1.2">HTTP/1.1 spec</a>. In contrast, the legacy router closes connections to dynos after each request. Keepalives usually improve performance, reducing time spent in TCP operations for both the router and the dyno. Yet, the opposite was true for a dyno running Puma.</p> <aside> <blockquote style="font-size: 26px;line-height: 1.5;text-align: center;padding: 10px 40px 40px;color: #697696; opacity: 0.75;font-style:italic; border:none;"> Router 2.0 adds support for HTTP keepalives by default, in line with the HTTP/1.1 spec. In contrast, the legacy router closes connections to dynos after each request. Keepalives usually improve performance, yet the opposite was true for a dyno running Puma. </blockquote> </aside> <h3 class="anchored"> <a name="diving-deep-into-puma" href="#diving-deep-into-puma">Diving deep into Puma</a> </h3> <p><em>Note that we suggest reviewing this brief <a href="https://github.com/puma/puma/blob/master/docs/architecture.md">Puma architecture</a> document if you’re unfamiliar with the framework and want to get the most out of this section. To skip the code review, you may fast-forward to <a href="#the-solution">The Solution</a>.</em></p> <p>This finding was enough of a smoking gun to send us deep into the Puma server code, where we homed in on the <a href="https://github.com/puma/puma/blob/7e17826da540019940a8e1a95fabe00883332d1a/lib/puma/server.rb#L438"><code>process_client</code></a> method. Let’s take a look at that code with a few details in mind:</p> <ol> <li>Each Puma thread can only handle a single connection at a time. A client is a wrapper around a connection.</li> <li>The <code>handle_request</code> method handles exactly 1 request. It returns <code>false</code> when the connection should be closed and <code>true</code> when it should be kept open. 
A client with keepalive enabled will end up in the <code>true</code> condition on line <code>470</code>.</li> <li> <code>fast_check</code> is only <code>false</code> once we’ve processed <code>@max_fast_inline</code> requests serially off the connection and when there are more connections waiting to be handled.</li> <li>For some reason, even when the number of connections exceeds the max number of threads, <code>@thread_pool.backlog &gt; 0</code> is often false. </li> <li>Altogether, this means the loop below usually keeps executing until we’re able to bail out when <code>handle_request</code> returns <code>false</code>.</li> </ol> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1732047455-process_client_2.png" alt="process_client_2"> <em>Code snippet from <a href="https://github.com/puma/puma/blob/7e17826da540019940a8e1a95fabe00883332d1a/lib/puma/server.rb#L438"><code>puma/lib/puma/server.rb</code></a> in Puma 6.4.2.</em></p> <p>When does <code>handle_request</code> actually return <code>false</code>? That is also based on a bunch of conditional logic; the core of it is in the <code>prepare_response</code> method. Basically, if <code>force_keep_alive</code> is <code>false</code>, <code>handle_request</code> will return <code>false</code>. (This is not exactly true. It’s more complicated, but that’s not important for this discussion.)</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1732047607-prepare_resp_2.png" alt="prepare_resp_2"> <br> <em>Code snippet from <a href="https://github.com/puma/puma/blob/7e17826da540019940a8e1a95fabe00883332d1a/lib/puma/request.rb#L157"><code>puma/lib/puma/request.rb</code></a> in Puma 6.4.2.</em></p> <p>The last piece of the puzzle: <code>max_fast_inline</code> defaults to <code>10</code>. That means Puma will process at least 10 requests serially off a single connection before handing the connection back to the reactor class. Requests that may have come in a full second ago are just sitting in the queue, waiting for their turn. This directly explains our <code>10*200ms = 2 seconds</code> of added response time for our longest requests!</p> <p>We figured setting <code>max_fast_inline=1</code> might fix this issue, and it does <em>sometimes</em>. However, under sufficient load, even with this setting, response times will climb. The problem is the other two OR’ed conditions circled in <span color="blue">blue</span> and <span color="red">red</span> above. Sometimes the number of busy threads is less than the max, and sometimes there are no new connections to accept on the socket. However, these decisions are made at a point in time, and the state of the server is constantly changing. They are subject to race conditions since other threads are concurrently accessing these variables and taking actions that modify their values.</p> <h3 class="anchored"> <a name="the-solution" href="#the-solution">The Solution</a> </h3> <p>After reviewing the Puma server code, we came to the conclusion that the simplest and safest way to bail out of processing requests serially would be to flat-out disable keepalives. Explicitly disabling keepalives in the Puma server means handing the client back to the reactor after each request. 
This is how we ensure requests are served in order.</p> <aside> <blockquote style="font-size: 26px;line-height: 1.5;text-align: center;padding: 10px 40px 40px;color: #697696; opacity: 0.75;font-style:italic; border:none;"> Explicitly disabling keepalives in the Puma server is how we ensure requests are served in order. </blockquote> </aside> <p>After confirming these results with the Heroku Ruby language owners, we opened a <a href="https://github.com/puma/puma/issues/3487">GitHub issue</a> on the Puma project and a <a href="https://github.com/puma/puma/pull/3496">pull request</a> to add an <code>enable_keep_alives</code> option to the Puma DSL. When set to <code>false</code>, keepalives are completely disabled. The option will be released soon, likely in Puma 6.5.0.</p> <p>We then re-ran our load tests with <code>enable_keep_alives</code> disabled in Puma and Router 2.0 enabled on the app:</p> <pre><code class="language-ruby"># config/puma.rb
...
enable_keep_alives false
</code></pre> <p>The response times and throughput improved, as expected. Additionally, after disabling Router 2.0, the response times stayed the same:</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1732047844-good_latencies_both.png" alt="good_latencies_both"></p> <h2 class="anchored"> <a name="moving-forward" href="#moving-forward">Moving forward</a> </h2> <h3 class="anchored"> <a name="keeping-keepalives" href="#keeping-keepalives">Keeping keepalives</a> </h3> <p>Keeping connections alive reduces time spent in TCP operations. Under sufficient load and scale, avoiding this overhead cost can positively impact apps’ response times. Additionally, keepalives are the de facto standard in HTTP/1.1 and HTTP/2. Because of this, Heroku has chosen to move forward with keepalives as the default behavior for Router 2.0.</p> <p>Through raising <a href="https://github.com/puma/puma/issues/3487">this issue</a> on the Puma project, there has already been <a href="https://github.com/puma/puma/pull/3506">movement</a> to fix the bad keepalive behavior in the Puma server. Heroku engineers remain active participants in discussions around these efforts and are committed to solving this problem. Once a full fix is available, customers will be able to upgrade their Puma versions and use keepalives safely, without risk of long response times.</p> <h3 class="anchored"> <a name="disabling-keepalives-as-a-stopgap" href="#disabling-keepalives-as-a-stopgap">Disabling keepalives as a stopgap</a> </h3> <p>In the meantime, we have provided another option for disabling keepalives when using Router 2.0. The following <code>labs</code> flag may be used in conjunction with Router 2.0 to disable keepalives between the router and your web dynos:</p> <pre><code class="language-bash">heroku labs:enable http-disable-keepalive-to-dyno -a my-app </code></pre> <p>Note that this flag has no effect when using the legacy router, as keepalives between the legacy router and dyno are not supported. For more information, see <a href="https://devcenter.heroku.com/articles/heroku-labs-disabling-keepalives-to-dyno-for-router-2-0">Heroku Labs: Disabling Keepalives to Dyno for Router 2.0</a>.</p> <h3 class="anchored"> <a name="other-options-for-puma" href="#other-options-for-puma">Other options for Puma</a> </h3> <p>You may find that your Puma app does not need keepalives disabled in order to perform well while using Router 2.0. 
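</p> <p>As a rough illustration of the knobs covered in the list below, a <code>config/puma.rb</code> might combine them like this (the values here are placeholders to experiment with, not recommendations):</p> <pre><code class="language-ruby"># config/puma.rb (illustrative values only)
workers 2           # more worker processes
threads 5, 5        # more threads per worker
max_fast_inline 1   # serve fewer requests serially per connection
</code></pre> <p>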
We recommend testing and tuning other configuration options, so that your app can still benefit from persistent connections between the new router and your dyno:</p> <ul> <li> <strong>Increase the number of threads</strong>. More threads means Puma is better able to handle concurrent connections.</li> <li> <strong>Increase the number of workers</strong>. This is similar to increasing the number of threads.</li> <li> <strong>Decrease the <code>max_fast_inline</code> number</strong>. This will limit the number of requests served serially off a connection before handling queued requests.</li> </ul> <h3 class="anchored"> <a name="other-languages-amp-frameworks" href="#other-languages-amp-frameworks">Other languages &amp; frameworks</a> </h3> <p>Our team also wanted to see if this same issue would present in other languages or frameworks. We ran load tests, injecting 200 milliseconds of server-side lag over the top languages and frameworks on the Heroku platform. Here are those results.</p> <style> .timing-results-table {max-width: 100%;overflow-x: scroll;} .timing-results-table table {font-size:12px; white-space:nowrap;} .timing-results-table table thead tr {background-color:#cacaca;} .timing-results-table table thead tr th {color:#323839;} .timing-results-table table tbody tr {border-bottom:1px solid #e0e0e0;} .timing-results-table table tbody tr:nth-child(2) td:nth-child(1n+5) {background-color:#ffc8c9} .timing-results-table table tbody tr:nth-child(4), .timing-results-table table tbody tr:nth-child(5) {background-color:#f2f2f2} .timing-results-table table tbody tr:nth-child(8), .timing-results-table table tbody tr:nth-child(9) {background-color:#f2f2f2} .timing-results-table table tbody tr:nth-child(10) td:nth-child(1n+5) {background-color:#f2f2f2} .timing-results-table table tbody tr:nth-child(11) td:nth-child(1n+5) {background-color:#f2f2f2} .timing-results-table table tbody tr:nth-child(12), .timing-results-table table tbody tr:nth-child(13) {background-color:#f2f2f2} </style> <div class="timing-results-table"> <table> <thead> <tr> <th>Language/Framework</th> <th>Router</th> <th>Web dynos</th> <th>Server-side lag</th> <th>Throughput</th> <th>P50 Response Time</th> <th>P95 Response Time</th> <th>P99 Response Time</th> </tr> </thead> <tbody> <tr> <td><a href="https://github.com/heroku/ruby-getting-started">Puma</a></td> <td>Legacy</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>215 ms</td> <td>287 ms</td> <td>335 ms</td> </tr> <tr> <td> <a href="https://github.com/heroku/ruby-getting-started">Puma</a> with keepalives</td> <td>Router 2.0</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>23 rps</td> <td>447 ms</td> <td>3,455 ms</td> <td>5,375 ms</td> </tr> <tr> <td> <a href="https://github.com/heroku/ruby-getting-started">Puma</a> without keepalives</td> <td>Router 2.0</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>215 ms</td> <td>271 ms</td> <td>335 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/node-js-getting-started">NodeJS</a></td> <td>Legacy</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>207 ms</td> <td>207 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/node-js-getting-started">NodeJS</a></td> <td>Router 2.0</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>207 ms</td> <td>207 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/python-getting-started">Python</a></td> <td>Legacy</td> <td>4 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>223 ms</td> <td>607 
ms</td> <td>799 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/python-getting-started">Python</a></td> <td>Router 2.0</td> <td>4 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>223 ms</td> <td>607 ms</td> <td>735 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/php-getting-started">PHP</a></td> <td>Legacy</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>367 ms</td> <td>431 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/php-getting-started">PHP</a></td> <td>Router 2.0</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>367 ms</td> <td>431 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/java-getting-started">Java</a></td> <td>Legacy</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>207 ms</td> <td>207 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/java-getting-started">Java</a></td> <td>Router 2.0</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>207 ms</td> <td>207 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/go-getting-started">Go</a></td> <td>Legacy</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>207 ms</td> <td>207 ms</td> </tr> <tr> <td><a href="https://github.com/heroku/go-getting-started">Go</a></td> <td>Router 2.0</td> <td>2 Standard-1X</td> <td>200 ms</td> <td>30 rps</td> <td>207 ms</td> <td>207 ms</td> <td>207 ms</td> </tr> </tbody> </table> </div> <p>These results indicate the issue is unique to Puma, with Router 2.0 performance comparable to the legacy router in other cases.</p> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>We were initially surprised by this keepalive behavior in the Puma server. Funny enough, we believe Heroku’s significance in the Puma/Rails world and the fact that the legacy router does not support keepalives may have been factors in this bug persisting for so long. Reports of it had popped up in the past (see <a href="https://github.com/puma/puma/issues/3443">Issue 3443</a>, <a href="https://github.com/puma/puma/issues/2625">Issue 2625</a> and <a href="https://github.com/puma/puma/issues/2311">Issue 2331</a>), but none of these prompted a fool-proof fix. Setting <code>enable_keep_alives false</code> does completely eliminate the problem, but this is not the default option. Now, Puma maintainers are taking a closer look at the problem and <a href="https://github.com/puma/puma-exp/tree/00-long-tail-hey">benchmarking potential fixes</a> in a <a href="https://github.com/puma/puma-exp">fork</a> of the project. The intention is to fix the balancing of requests without closing TCP connections to the Puma server.</p> <p>Our Heroku team is thrilled that we were able to contribute in this way and help move the Puma/Rails community forward. We’re also excited to release Router 2.0 as GA, unlocking new features like <a href="https://devcenter.heroku.com/articles/http-routing#http-2-with-router-2-0">HTTP/2</a> and keepalives to your dynos. We encourage our users to try out this new router! 
For advice on how to go about that, see <a href="https://blog.heroku.com/tips-tricks-router-2dot0-migration">Tips &amp; Tricks for Migrating to Router 2.0</a>.</p> </description> <author>Elizabeth Cox</author> </item> <item> <title>Tips & Tricks for Migrating to Router 2.0</title> <link>https://blog.heroku.com/tips-tricks-router-2dot0-migration</link> <pubDate>Thu, 21 Nov 2024 18:13:00 GMT</pubDate> <guid>https://blog.heroku.com/tips-tricks-router-2dot0-migration</guid> <description><p>Heroku <a href="https://devcenter.heroku.com/changelog-items/3063">Router 2.0</a> is now generally available, marking a significant step forward in our infrastructure modernization efforts. The new router delivers enhanced performance and introduces new features to improve your applications’ functionality. There are, of course, nuances to be aware of with any new system, and with <a href="https://devcenter.heroku.com/articles/http-routing#legacy-router-and-router-2-0">Router 2.0</a> set to become the default router soon, we’d like to share some tips and tricks to ensure a smooth and seamless transition. </p> <h2 class="anchored"> <a name="start-with-a-staging-application" href="#start-with-a-staging-application">Start with a Staging Application</a> </h2> <p>We recommend exploring the new <a href="https://devcenter.heroku.com/articles/http-routing#legacy-router-and-router-2-0">router’s features</a> and validating your specific use cases in a controlled environment. If you haven’t already, <a href="https://devcenter.heroku.com/articles/multiple-environments#managing-staging-and-production-configurations">spin up a staging version</a> of your app that mirrors your production set-up as closely as possible. Heroku provides helpful tools, like <a href="https://devcenter.heroku.com/articles/pipelines">pipelines</a> and <a href="https://devcenter.heroku.com/articles/github-integration-review-apps">review apps</a>, for creating separate environments for your app. Once you have an application that you can test with, you can opt-in to Router 2.0 by running:</p> <pre><code class="language-bash">$ heroku features:enable http-routing-2-dot-0 -a &lt;staging app name&gt; </code></pre> <p>You may see a temporary rise in response times after migrating to the new router, due to the presence of connections on both routers. Using the Heroku CLI, run <code>heroku ps:restart</code> to restart all web dynos. You can also accomplish this using the Heroku Dashboard, see <a href="https://devcenter.heroku.com/articles/addressing-h12-errors-request-timeouts#restart-dynos">Restart Dynos</a> for details. This will force the closing of any connections from the legacy router. You can monitor your individual request response times via the <code>service</code> field in your application’s logs or see accumulated <a href="https://devcenter.heroku.com/articles/metrics#response-time">response time metrics</a> in the Heroku dashboard. </p> <h2 class="anchored"> <a name="how-to-determine-if-your-traffic-is-going-through-router-2-0" href="#how-to-determine-if-your-traffic-is-going-through-router-2-0">How to Determine if Your Traffic is Going Through Router 2.0</a> </h2> <p>Once your staging app is live and you have enabled the <code>http-routing-2-dot-0</code> Heroku Feature, you’ll want to confirm that traffic is actually being routed through Router 2.0. 
There are two easy ways to determine the router your app is using.</p> <h3 class="anchored"> <a name="http-headers" href="#http-headers">HTTP Headers</a> </h3> <p>You can identify which router your application is using by inspecting the HTTP Headers. The <code>Via</code> header, present in all HTTP responses from Heroku applications, is a code name for the Heroku router handling the request. Use the <code>curl</code> command to display the response headers of a request or your preferred browser’s developer tool.</p> <p>To see the headers using <code>curl</code>, run:</p> <pre><code class="language-bash">curl --head https://your-domain.com </code></pre> <p>In Router 2.0 the <code>Via</code> header value will be one of the following (depending on whether the protocol used is HTTP/2 or HTTP/1.1):</p> <pre><code class="language-bash">&lt; server: Heroku &lt; via: 2.0 heroku-router </code></pre> <pre><code class="language-bash">&lt; Server: Heroku &lt; Via: 1.1 heroku-router </code></pre> <p>The Heroku legacy router code name for comparison, is:</p> <pre><code class="language-bash">&lt; Server: Cowboy &lt; Via: 1.1 vegur </code></pre> <p>Note that per the HTTP/2 spec, <a href="https://datatracker.ietf.org/doc/html/rfc7540#section-8.1.2">RFC 7540 Section 8.1.2</a>, headers are converted to lowercase prior to their encoding in HTTP/2. </p> <p>To read more about Heroku Headers, see this <a href="https://devcenter.heroku.com/articles/http-routing#heroku-headers">article</a>.</p> <h3 class="anchored"> <a name="logs" href="#logs">Logs</a> </h3> <p>You will also see some subtle differences in your application’s system logs after migrating to Router 2.0. To fetch your app’s most recent system logs, use the <code>heroku logs --source heroku</code> command:</p> <pre><code class="language-bash">2024-10-03T08:20:09.580640+00:00 heroku[router]: at=info method=GET path="/" host=example-app-1234567890ab.heroku.com request_id=2eab2d12-0b0b-c951-8e08-1e88f44f096b fwd="204.204.204.204" dyno=web.1 connect=0ms service=0ms status=200 bytes=6742 protocol=http2.0 tls=true tls_version=tls1.3 </code></pre> <pre><code class="language-bash">2024-10-03T08:35:18.147192+00:00 heroku[router]: at=info method=GET path="/" host=example-app-1234567890ab.heroku.com request_id=edbea7f4-1c07-a533-93d3-99809b06a2be fwd="204.204.204.204" dyno=web.1 connect=0ms service=0ms status=200 bytes=6742 protocol=http1.1 tls=false </code></pre> <p>In this example, the output shows two log lines for requests sent to an app’s custom domain, handled by Router 2.0 over both HTTPS and HTTP protocols. 
You can compare these to the equivalent router log lines handled by the legacy routing system:</p> <pre><code class="language-bash">2024-10-03T08:22:25.126581+00:00 heroku[router]: at=info method=GET path="/" host=example-app-1234567890ab.heroku.com request_id=1b77c2d3-6542-4c7a-b3db-0170d8c652b6 fwd="204.204.204.204" dyno=web.1 connect=0ms service=1ms status=200 bytes=6911 protocol=https </code></pre> <pre><code class="language-bash">2024-10-03T08:33:49.139436+00:00 heroku[router]: at=info method=GET path="/" host=example-app-1234567890ab.heroku.com request_id=057d3a4b-2f16-4375-ba74-f6b168b2fe3d fwd="204.204.204.204" dyno=web.1 connect=1ms service=1ms status=200 bytes=6911 protocol=http </code></pre> <p>The <a href="https://devcenter.heroku.com/articles/http-routing#router-2-0-logs">key differences in the router logs</a> are:</p> <ul> <li>In Router 2.0, the protocol field will display values like <code>http2.0</code> or <code>http1.1</code>, unlike the legacy router which identifies the protocol with <code>https</code> or <code>http</code>.</li> <li>In Router 2.0, you will see new fields <code>tls</code> and <code>tls_version</code> (the latter will only be present if a request is sent over a TLS connection).</li> </ul> <p><a href="https://devcenter.heroku.com/articles/logging#view-logs">Here</a> are some alternative ways to view your application's logs. </p> <h2 class="anchored"> <a name="http-2-is-now-the-default" href="#http-2-is-now-the-default">HTTP/2 is Now the Default</a> </h2> <p>One of the most exciting changes in Router 2.0 is that HTTP/2 is now enabled by default. This new version of the protocol brings improvements in performance, especially for apps handling concurrent requests, as it allows multiplexing over a single connection and prioritizes resources efficiently. </p> <p>Here are some considerations when using <a href="https://devcenter.heroku.com/articles/http-routing#http-2-with-router-2-0">HTTP/2 on Router 2.0</a>:</p> <ul> <li>HTTP/2 terminates at the Heroku router and we forward HTTP/1.1 from the router to your app.</li> <li>Router 2.0 supports HTTP/2 on custom domains, but not on the built-in <code>&lt;app-name-cff7f1443a49&gt;.herokuapp.com&gt;</code> default domain.</li> <li>A valid TLS certificate is required for HTTP/2. We recommend using <a href="https://devcenter.heroku.com/articles/automated-certificate-management">Heroku Automated Certificate Management</a>.</li> </ul> <p>You can verify your app is receiving HTTP/2 requests by referencing the protocol value in your application’s logs or looking at the HTTP response headers for your request.</p> <p>That said, not all applications are ready for HTTP/2 out-of-the-box. If you notice any issues during testing or if the older protocol is simply more suitable for your needs, you can disable HTTP/2 in Router 2.0, reverting to HTTP/1.1. Run the following command:</p> <pre><code class="language-bash">heroku labs:enable http-disable-http2 -a &lt;app name&gt; </code></pre> <h2 class="anchored"> <a name="keepalives-always-on" href="#keepalives-always-on">Keepalives Always On</a> </h2> <p>Another key enhancement in Router 2.0 is the improved handling of keepalives, setting it apart from our legacy router. Router 2.0 enables <a href="https://devcenter.heroku.com/articles/http-routing#keepalives">keepalives</a> for all connections between itself and web dynos by default, unlike the legacy router which opens a new connection for every request to a web dyno and closes it upon receiving the response. 
Allowing keepalives can help optimize connection reuse and reduce the overhead of opening new TCP connections. This in turn lowers request latencies and allows higher throughput. </p> <p>Unfortunately, this optimization is not 100% compatible with every app. Specifically, recent Puma versions have a connection-handling bug that results in significantly longer tail request latencies if keepalives are enabled. Thanks to one of our customers, we learned this during the Router 2.0 beta period. For more details, see the <a href="https://blog.heroku.com/pumas-routers-keepalives-ohmy">blog post</a> on this topic. Their early adoption of our new router and timely feedback helped us pinpoint the issue and after extensive investigation, identify the problem with Puma and keepalives.</p> <p>Just like with HTTP/2 we realize one size does not fit all, thus we have introduced a new labs feature that allows you to opt-out of keepalives. To disable keepalives in Router 2.0, you can run the following command:</p> <pre><code class="language-bash">heroku labs:enable http-disable-keepalive-to-dyno -a &lt;app name&gt; </code></pre> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>Migrating to Router 2.0 represents a critical step in leveraging Heroku’s latest infrastructure improvements. The transition offers exciting new features like HTTP/2 support and enhanced connection handling. To facilitate a seamless transition we recommend you start testing the new router before we begin the Router 2.0 rollout to all customers in the coming months. By following these tips and confirming your app’s routing needs are met on Router 2.0, you will be well-prepared to take full advantage of the new router’s benefits. </p> <p>Stay tuned for more updates as we continue to improve Router 2.0’s capabilities and gather feedback from the developer community!</p> </description> <author>Agne Klimaite</author> </item> <item> <title>Planning Your PostgreSQL Migration: Best Practices and Key Considerations</title> <link>https://blog.heroku.com/planning-your-postgresql-migration</link> <pubDate>Tue, 19 Nov 2024 19:40:48 GMT</pubDate> <guid>https://blog.heroku.com/planning-your-postgresql-migration</guid> <description><p>Your organization may have many reasons to move a cloud service from one provider to another. Maybe you’ve found a better performance-versus-cost balance elsewhere. Maybe you’re trying to avoid vendor lock-in. Whatever your reasons, the convenience and general interoperability of cloud services today put you in the driver's seat. <em>You</em> get to piece together the tech stack and the cloud provider(s) that best align with your business.</p> <p>This includes where you turn for your <a href="https://www.heroku.com/postgres">PostgreSQL database</a>.</p> <p>If you’re considering migrating your Postgres database to a different cloud provider, such as Heroku, the process might seem daunting. You’re concerned about the risk of data loss or the impact of extended downtime. Are the benefits worth the effort and the risk?</p> <!-- more --> <p>With the right strategy and a solid plan in place, migrating your Postgres database is absolutely manageable. In this post, we’ll walk you through the key issues and best practices to ensure a successful Postgres migration. 
By the end of this guide, you’ll be well equipped to make the move that best serves your organization.</p> <h2 class="anchored"> <a name="pre-migration-assessment" href="#pre-migration-assessment">Pre-migration assessment</a> </h2> <p>Naturally, you need to know your starting point before you can plan your route to a destination. For a database migration, this means evaluating your current Postgres setup. Performing a pre-migration assessment will help you identify any potential challenges, setting you up for a smooth transition.</p> <p>Start by reviewing the core aspects of your database.</p> <h3 class="anchored"> <a name="database-version" href="#database-version">Database version</a> </h3> <p>Ensure the target cloud provider supports your current Postgres version. When you’re connected via the psql CLI client, the following commands will help you get your database version, with varying levels of detail:</p> <pre><code class="language-psql">psql=&gt; SELECT version(); PostgreSQL 12.19 on aarch64-unknown-linux-gnu, compiled by gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-6), 64-bit psql=&gt; SHOW server_version; 12.19 </code></pre> <h3 class="anchored"> <a name="extensions" href="#extensions">Extensions</a> </h3> <p>Check for any Postgres extensions installed on your current database that are critical to your applications. Some extensions might not be available on your new platform, so be sure to verify this compatibility upfront.</p> <pre><code class="language-sql">psql=&gt; \dx List of installed extensions -[ RECORD 1 ]-------------------------------------------------------------- Name | fuzzystrmatch Version | 1.1 Schema | public Description | determine similarities and distance between strings -[ RECORD 2 ]-------------------------------------------------------------- Name | plpgsql Version | 1.0 Schema | pg_catalog Description | PL/pgSQL procedural language -[ RECORD 3 ]-------------------------------------------------------------- Name | postgis Version | 3.0.0 Schema | public Description | PostGIS geometry, geography, and raster spatial types and… </code></pre> <h3 class="anchored"> <a name="configurations" href="#configurations">Configurations</a> </h3> <p>Determine and document any custom configurations for your database instance. This may include memory settings, timeouts, and query optimizations. Depending on the infrastructure and performance capabilities of your destination cloud provider, you may need to adjust these configurations.</p> <p>You might be able to track down the files for your initial Postgres configuration (such as <code>pg_hba.conf</code> and <code>postgresql.conf</code>). However, if you don’t have access to those files, or your configuration settings have changed, you can capture all of your current settings into a file for review. Run the following command in your terminal:</p> <pre><code class="language-bash"># Include any connection and credentials flags you need
$ psql -c "\copy (select * from pg_settings) to '/tmp/psql_settings.csv' with (format csv, header true);"
</code></pre> <p>This will create a file at <code>/tmp/psql_settings.csv</code> with the full list of configurations you can review.</p> <h3 class="anchored"> <a name="schema-and-data-compatibility" href="#schema-and-data-compatibility">Schema and data compatibility</a> </h3> <p>Review the schema, data types, and indexes in your current database. Ensure they’re fully compatible with the Postgres version and configurations on the target cloud provider. 
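</p> <p>One low-effort way to do this review is to dump the schema without any data and try restoring it against a scratch database on the target provider. For example (the flags and database names here are illustrative):</p> <pre><code class="language-bash"># Dump only the schema (no data) from the source database
$ pg_dump --schema-only --no-owner mydb &gt; schema.sql

# Restore it into a throwaway database on the target to surface incompatibilities
$ psql -d scratch_db -f schema.sql
</code></pre> <p>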
The <a href="https://www.postgresql.org/about/featurematrix">feature matrix</a> in the Postgres documentation provides a quick reference to see what is or isn’t supported for any given version.</p> <h3 class="anchored"> <a name="performance-benchmark" href="#performance-benchmark">Performance benchmark</a> </h3> <p>Measure the current performance of your PostgreSQL database. When you establish performance benchmarks, you can compare pre- and post-migration metrics. This will help you (and any other migration stakeholders) understand how the new environment meets or exceeds your business requirements.</p> <p>When making your performance comparison, focus on key metrics like query performance, I/O throughput, and response times.</p> <h3 class="anchored"> <a name="identify-dependencies" href="#identify-dependencies">Identify dependencies</a> </h3> <p>Create a detailed catalog of the integrations, applications, and services that rely on your database. Your applications may use ORM tools, or you have microservices or APIs that query your database. Don’t forget about any third-party services that may access the database, too. You’ll need this comprehensive list when it’s time to cutover all connections to your new provider’s database. This will help you minimize disruptions and test all your connections.</p> <h2 class="anchored"> <a name="migration-strategy" href="#migration-strategy">Migration strategy</a> </h2> <p>When deciding on an actual database migration strategy, you have multiple options to choose from. The one you choose primarily depends on the size of your database and how much downtime you’re willing to endure. Let’s briefly highlight the main strategies.</p> <h3 class="anchored"> <a name="1-dump-and-restore" href="#1-dump-and-restore">#1: Dump and restore</a> </h3> <p>This method is the simplest and most straightforward. You create a full backup of your Postgres database using the <a href="https://www.postgresql.org/docs/current/app-pgdump.html"><code>pg_dump</code></a> utility. Then, you restore the backup on your target cloud provider using <a href="https://www.postgresql.org/docs/current/app-pgrestore.html"><code>pg_restore</code></a>. For most migrations, dump and restore is the preferred solution. However, keep in mind the following caveats:</p> <ul> <li>This is best suited for smaller databases. One recommendation from <a href="https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.postgresql-rds-postgresql-full-load-pd_dump.html">this AWS guide</a> is not to use this strategy if your database exceeds 100 GB in size. To determine the true size of your database, use the <a href="https://www.postgresql.org/docs/current/sql-vacuum.html"><code>VACUUM ANALYZE</code></a> commands in Postgres.</li> <li>This strategy requires some system downtime. It takes time to dump, transfer, restore, and test the data. Any database updates occurring during that time would be missed in the cutover, leaving your database out of sync. Plan for a generous amount of downtime — at least several hours — for this entire migration process.</li> </ul> <h3 class="anchored"> <a name="2-logical-replication" href="#2-logical-replication">#2: Logical replication</a> </h3> <p>Logical replication replicates changes from the source instance to the target. The source instance is set up to publish any changes, while the target instance listens for changes. As changes are made to the source database, they are replicated in real time on the destination database. 
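</p> <p>On PostgreSQL 10 and later, the core of this setup is a publication on the source and a subscription on the target. A minimal sketch, with placeholder object names and connection details:</p> <pre><code class="language-sql">-- On the source database (requires wal_level = logical):
CREATE PUBLICATION migration_pub FOR ALL TABLES;

-- On the target database, after the schema has been created there:
CREATE SUBSCRIPTION migration_sub
  CONNECTION 'host=source.example.com dbname=appdb user=replicator password=...'
  PUBLICATION migration_pub;
</code></pre> <p>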
Eventually, both databases become synchronized and stay that way until you’re ready to cut over.</p> <p>This approach allows you to migrate data with little to no downtime. However, the setup and management of replication may be complex. Also, certain updates, such as schema modifications, are not replicated. This means you’ll need some manual intervention during the migration to carry over these changes.</p> <h3 class="anchored"> <a name="3-physical-replication" href="#3-physical-replication">#3: Physical replication</a> </h3> <p>Adopting a physical replication strategy means copying the actual block-level files that make up your database and then transferring them to the target database machine. This is a good option when you need the consistency of an exact, block-level replica of your data and system state.</p> <p>For this strategy to work, your source and target Postgres versions must be identical. In addition, this approach introduces downtime that is similar to the dump and restore approach. So, unless you have a unique situation that requires such a high level of consistency, you may be better off with dump and restore.</p> <h3 class="anchored"> <a name="4-managed-migration-tools" href="#4-managed-migration-tools">#4: Managed migration tools</a> </h3> <p>Finally, you might consider managed migration tools offered by some cloud providers. These tools automate and manage many aspects of the migration process, such as data transfer, replication, and minimization of downtime. These tools may be ideal if you’re looking to simplify the process while ensuring reliability.</p> <p>Migration tools are not necessarily a silver bullet. Depending on the size of your database and the duration of the migration process, you may incur high costs for the service. In addition, managed tools may have less customizability, requiring you to still do the manual work of migrating over extensions or configurations.</p> <h2 class="anchored"> <a name="data-transfer-and-security" href="#data-transfer-and-security">Data transfer and security</a> </h2> <p>When performing your migration, ensuring the secure and efficient transfer of data is essential. This means putting measures in place to protect your data integrity and confidentiality. Those measures include:</p> <ul> <li> <strong>Database backup</strong>: Before starting the migration, create a reliable backup of your database. Ensure the backup is encrypted, and store it securely. This backup will be your fail-safe, in case the migration does not go as planned. <strong>Even if your plan seems airtight and nothing could possibly go wrong… do not skip this step.</strong> Your future self will thank you.</li> <li> <strong>Data encryption</strong>: When transferring data between providers, use encryption to protect sensitive information from interception or tampering. Encrypt your data both at rest and in transit.</li> <li> <strong>Efficient transfer</strong>: Transferring large datasets can be network intensive, requiring a lot of bandwidth and time. However, you can make this process more efficient. Use compression techniques to reduce the size of the data to be transferred. For smaller databases, you might use a secure file transfer method such as SCP or SFTP.
For larger ones, you might use a dedicated, high-throughput connection like AWS Direct Connect.</li> </ul> <h2 class="anchored"> <a name="network-and-availability-connections" href="#network-and-availability-connections">Network and availability connections</a> </h2> <p>Along with database configurations, you’ll need to set up the network with your new cloud provider to ensure smooth connectivity. This includes configuring VPCs and firewall rules, and establishing peering between environments. Ideally, complete and validate these steps before the data migration begins.</p> <p>To optimize performance, tune key connection settings like <code>max_connections</code>, <code>shared_buffers</code>, and <code>work_mem</code>. Start with the same settings as your source database. Then, after migration, adjust them based on your new infrastructure’s memory and network capabilities.</p> <p>Lastly, configure failover and high availability in the target environment, potentially setting up replication or clustering to maintain uptime and reliability.</p> <h2 class="anchored"> <a name="downtime-minimization-and-rollback-planning" href="#downtime-minimization-and-rollback-planning">Downtime minimization and rollback planning</a> </h2> <p>Minimizing downtime during a migration is crucial, especially for production databases. Your cutover strategy outlines the steps for switching from the source to the target database with as little disruption as possible. Refer to the list you made when <a href="#identify-dependencies">identifying dependencies</a>, so you won’t overlook modifying the database connection for any application or service.</p> <p>How much downtime to plan for depends on the <a href="#migration-strategy">migration strategy</a> that you’ve chosen. Ensure that you’ve properly communicated with your teams and (if applicable) your end users, so that they can prepare for the database and all dependent services to be temporarily unavailable.</p> <p>And remember: Even with the best plans, things can go wrong. It’s essential to have a clear rollback strategy. This will likely include reverting to a database backup and restoring the original environment. Test your rollback plan in advance as thoroughly as possible. If the time comes, you’ll need to be able to execute it quickly and confidently.</p> <h2 class="anchored"> <a name="testing-and-validation" href="#testing-and-validation">Testing and validation</a> </h2> <p>After the migration, but before you sound the all clear, you should test thoroughly to ensure everything functions as expected. Your tests should include:</p> <ul> <li> <strong>Data integrity checks</strong>, such as comparing row counts and using checksums to confirm that all data has transferred correctly and without corruption.</li> <li> <strong>Performance testing</strong> by running queries and monitoring key metrics, such as latency, throughput, and resource utilization. This will help you determine whether the new environment meets performance expectations or whether you’ll need to fine-tune certain settings.</li> <li> <strong>Application testing</strong> ensures any dependent services interact correctly with the new database.
Test all your integrations to validate they perform seamlessly even with the new setup.</li> </ul> <h2 class="anchored"> <a name="post-migration-considerations" href="#post-migration-considerations">Post-migration considerations</a> </h2> <p>With your migration complete, you can breathe a sigh of relief. However, there’s still work to do. Close the loop by taking care of the following:</p> <ul> <li> <strong>Optimize your Postgres setup</strong> for the new environment. This includes fine-tuning performance settings like indexing or query plans.</li> <li> <strong>Implement database monitoring</strong>, with tools to track performance and errors. Robust monitoring tools will help you catch potential issues and maintain visibility into database health.</li> <li>Update your <strong>backup and disaster recovery</strong> strategies, ensuring that everything is properly configured according to your new provider’s options. Test and review your recovery plans regularly.</li> </ul> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>Migrating your Postgres database between cloud providers can be a complex process. However, with proper planning and preparation, it’s entirely possible to experience a smooth execution. </p> <p>By following the best practices and key steps above, you’ll be well on your way toward enjoying the benefits of leveraging Postgres from whatever cloud provider you choose.</p> <p>To recap quickly, here are the major points to keep in mind:</p> <ul> <li> <strong>Pre-migration assessment</strong>: Evaluate your current setup, check for compatibility at your target provider, and identify dependencies for a seamless cutover.</li> <li> <strong>Migration strategy</strong>: Choose the approach that fits your database size and tolerance for downtime. In most cases, this will be the dump and restore strategy.</li> <li> <strong>Data transfer and security</strong>: Ensure you have reliable backups securely stored, and that all your data—from backups to migration data—is encrypted at rest and in transit.</li> <li> <strong>Network and availability connections</strong>: Don’t forget to port over any custom configurations, at both the database level and the network level, to your new environment.</li> <li> <strong>Testing and validation</strong>: Before you can declare the migration as complete, you should perform tests to verify data integrity, performance, and application compatibility.</li> <li> <strong>Post-migration considerations</strong>: After you’re up and running with your new provider, optimize performance, implement monitoring, and update your disaster recovery strategies.</li> </ul> <p>Stay tuned for our upcoming guides, where we'll walk you through the specifics of migrating your Postgres database from various cloud providers to <a href="https://www.heroku.com/postgres">Heroku Postgres</a>.</p> </description> <author>Ken W. Alger</author> </item> <item> <title>Heroku Open Sources the Twelve-Factor App Definition</title> <link>https://blog.heroku.com/heroku-open-sources-twelve-factor-app-definition</link> <pubDate>Tue, 12 Nov 2024 17:15:00 GMT</pubDate> <guid>https://blog.heroku.com/heroku-open-sources-twelve-factor-app-definition</guid> <description><p>Today, we are excited to announce Twelve-Factor is now an open source project. This is a special moment in the journey of Twelve-Factor over the years. 
Published over a decade ago by Heroku co-founder Adam Wiggins to codify the <a href="https://en.wikipedia.org/wiki/Twelve-Factor_App_methodology">best practices for writing SaaS apps</a>, Twelve-Factor and the ideas it espouses have inspired generations of software engineers.</p> <blockquote class="pullquote-sm" style="padding: 10px 30px 10px 30px;font-size: inherit;border-left: 5px solid #EEF1F6; margin-bottom: 30px;"> <p> Open sourcing 12-Factor is an important milestone to take the industry forward and codify best practices for the future. As the modern app architecture reflected in the 12-Factors became mainstream, new technologies and ideas emerged, and we needed to bring more voices and experiences to the discussion.</p> <cite> <span class="quote-author">Vish Abrams</span> <span class="quote-author-meta">Chief Architect, Heroku by Salesforce</span> </cite> </blockquote> <p>We’re open sourcing Twelve-Factor because the principles were always meant to serve the broader software community, not just one company. Over time, SaaS went from a growing area of software delivery to the dominant distribution method for software, and IaaS overtook data centers for infrastructure. The cloud is now the default. </p> <p>At the same time, the technology landscape changed. Containers and Kubernetes have done to the application layer what virtual machines did to servers, spawning huge ecosystems and communities of their own focused on a new layer of app and infrastructure abstraction. </p> <p>With these shifts in mind, we looked at how to drive Twelve-Factor forward so it remains relevant in the decades to come. Collectively, we in the industry, end users and vendors alike, have learned a great deal from running apps and systems at scale over the past decade, and it’s this collective knowledge that we need to codify to help the next wave of app teams succeed faster and more easily. This movement is bigger than one company, and to open it up to an industry-wide conversation, we are open sourcing it. </p> <blockquote class="pullquote-sm" style="padding: 10px 30px 10px 30px;font-size: inherit;border-left: 5px solid #EEF1F6; margin-bottom: 30px;"> <p>When I wrote Twelve Factor nearly 14 years ago, I never would have guessed these principles would remain relevant for so long, but cloud and backends have changed a lot since 2011! So it makes sense to turn Twelve-Factor into a community-maintained document that can evolve over time.</p> <cite> <span class="quote-author">Adam Wiggins</span> <span class="quote-author-meta">Heroku Founder, now GM of Platform at The Browser Company</span> </cite> </blockquote> <p>What does this mean for Heroku? We will continue to support Twelve-Factor as part of the community. The Heroku platform has always been an implementation of the Twelve-Factors to make the act of building and deploying apps easier, and this will continue to be the case: as Twelve-Factor evolves, Heroku will evolve. </p> <p>We invite you to get to know the <a href="https://github.com/twelve-factor/twelve-factor/blob/main/VISION.md">project vision</a>, meet the <a href="https://github.com/twelve-factor/twelve-factor/blob/main/MAINTAINERS.md">maintainers</a>, and <a href="https://github.com/twelve-factor/twelve-factor/blob/main/CONTRIBUTING.md">participate</a> in the project.
Read more about the project and community on the <a href="https://12factor.net/blog/open-source-announcement">Twelve-Factor blog</a>.</p> </description> <author>Betty Junod</author> </item> <item> <title>Building Supercharged Agents with Heroku and Agentforce</title> <link>https://blog.heroku.com/building-supercharged-agents-heroku-agentforce</link> <pubDate>Thu, 17 Oct 2024 18:15:00 GMT</pubDate> <guid>https://blog.heroku.com/building-supercharged-agents-heroku-agentforce</guid> <description><p>Heroku is a powerful general-purpose PaaS offering, but when combined with the broader Salesforce portfolio, it excels in unlocking and unifying customer data, regardless of its age, location, size, or structure. Salesforce customers often turn to Heroku when they require such data to be securely linked to high-scale experiences, such as consumer web or mobile apps, or when they need scalable compute resources to access and analyze more intricate and complex data in real time. In this blog, we’ll explore how to supercharge <a href="https://www.salesforce.com/agentforce/">Agentforce</a> by leveraging one of the ways in which the Heroku platform is used to transform data from diverse sources, offering comprehensive, real-time information that keeps employees in the flow of work.</p> <!-- more --> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1729003747-1-heroku-and-agentforce.png" alt="Supercharge Agentforce with Heroku"></p> <p>Salesforce recently launched a new AI-driven technology, Agentforce, along with an array of prebuilt agents tailored to each role within Customer 360, from service to sales and various industries. Agentforce relies on discrete actions described to the AI engine, allowing it to interpret user questions and execute one or more actions (effectively coded functions) to deliver an answer. </p> <p>However, some use cases require actions that are more customized to a specific business or workflow. In these situations, custom actions can be built using both code and low-code solutions, enabling developers to extend the range of actions available to Agentforce. Developers can utilize Apex or Flow; if the necessary data resides within Salesforce and the complexity and computational needs are minimal, both options are worth exploring first. However, if this is not the case, a Heroku custom action written in languages other than Apex can be added to Agentforce agents, as will be demonstrated in this blog post.</p> <h2 class="anchored"> <a name="introducing-ultraconstruction-an-agentforce-user" href="#introducing-ultraconstruction-an-agentforce-user">Introducing UltraConstruction, an Agentforce User</a> </h2> <p>Let's take a look at a use case first. UltraConstruction, a 60-year-old company, uses Salesforce Sales and Service Cloud agents to handle customer inquiries. However, their older, unstructured invoices are stored in cloud archives, creating access challenges for their AI agents and leading to delays and customer frustration.</p> <div style="max-width:500px; margin-bottom:30px;"> <img src="https://heroku-blog-files.s3.amazonaws.com/posts/1729003814-2-agentforce-access-challenges.png" alt="Data access challenges with Agentforce"> </div> <p>UltraConstruction’s Agentforce builders and developers have discovered that older invoice information is stored in cloud file archives in various unstructured formats, such as Microsoft Word, PDFs, and images.
UltraConstruction does not need this information imported but requires it to be accessible by their agents.</p> <div style="max-width:250px; margin-bottom:30px;"> <img src="https://heroku-blog-files.s3.amazonaws.com/posts/1729003842-3-archived-invoices.png" alt="archived invoices"> </div> <p>UltraConstruction’s developers know that Java has a rich ecosystem of libraries to handle such formats, and that Heroku offers the vertical scalability needed to process and analyze the extracted data in real time. With the additional help of AI, they can make the action more flexible in terms of the queries it can handle—so they get coding! The custom Agentforce action they develop on Heroku accesses information without moving that data, and answers not only the above query but practically any other query that sales or service employees might encounter.</p> <div style="max-width:500px; margin-bottom:30px;"> <img src="https://heroku-blog-files.s3.amazonaws.com/posts/1729003873-4-agentforce-heroku-access.png" alt="Heroku helps agentforce access information"> </div> <h2 class="anchored"> <a name="an-agentforce-and-heroku-integration-blueprint" href="#an-agentforce-and-heroku-integration-blueprint">An Agentforce and Heroku Integration Blueprint</a> </h2> <p>UltraConstruction’s use case can occur regardless of the type, age, location, size, or structure of the data. Even for data already residing in Salesforce, more intensive computational tasks such as analytics, transformations, or ad-hoc queries are possible using Heroku and its array of languages and elastic compute managed services. Before we dive into the UltraConstruction Agentforce action, let's review the overall approach to using Heroku with Agentforce.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1729004022-5-agentforce-heroku-blueprint.png" alt="Using Heroku with Agentforce - a blueprint"></p> <p>On the far right of the diagram above, we can see customer data depicted in various shapes, sizes, and locations, all of which can be accessed by Heroku-managed code on behalf of the agent. In the top half of the diagram, Agentforce manages which actions to use. Heroku-powered actions are exposed via <a href="https://help.salesforce.com/s/articleView?id=sf.external_services.htm&amp;type=5">External Services</a> and later imported as an Agent Action via Agent Builder. </p> <p>In the bottom half of the diagram, since External Services are used, the only requirement for the Heroku app is to support the <a href="https://www.openapis.org/">OpenAPI</a> standard to describe the app's API inputs and outputs, specifically the request and response of the action. Finally, keep in mind that Heroku applications can call out to other services, leverage <a href="https://elements.heroku.com/">Heroku add-ons</a>, and utilize many industry programming languages with libraries that significantly speed up the development process.</p> <h2 class="anchored"> <a name="a-sample-agentforce-heroku-action" href="#a-sample-agentforce-heroku-action">A Sample Agentforce Heroku Action</a> </h2> <p>Now that you know the use case and the general approach, in the following video and GitHub repository <a href="https://github.com/heroku-reference-apps/agentforce-archive-agent?tab=readme-ov-file#archive-agent-for-use-with-agentforce">README</a> file, you will be able to try this out for yourself! The action has been built to simulate the scenario that UltraConstruction found themselves in, with some aspects simplified to make the sample easier to understand and deploy. 
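Before looking at how the sample is put together, it helps to see the general shape of such an action. The following is a heavily simplified, hypothetical Java sketch (it uses Spring Boot and springdoc, which the next section describes; the class and endpoint names are made up, and this is not the repository’s actual code) of an endpoint whose generated OpenAPI description External Services can import:</p> <pre><code class="language-java">// Hypothetical sketch only -- not the actual code from the reference repository.
package com.example.archive;

import io.swagger.v3.oas.annotations.Operation;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class ArchiveActionApplication {
    public static void main(String[] args) {
        SpringApplication.run(ArchiveActionApplication.class, args);
    }
}

@RestController
class ArchiveQueryController {

    record QueryRequest(String question) {}
    record QueryResponse(String answer) {}

    // springdoc publishes this endpoint in the generated OpenAPI document
    // (served at /v3/api-docs by default), which is what External Services imports.
    @Operation(summary = "Answer a natural-language question about archived invoices")
    @PostMapping("/query")
    public QueryResponse query(@RequestBody QueryRequest request) {
        // The real sample uses Spring AI to turn the question into SQL and runs it
        // against an in-memory H2 database; here the behavior is stubbed.
        return new QueryResponse("Stubbed answer for: " + request.question());
    }
}</code></pre> <p>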
The following diagram highlights how the above blueprint was taken and expanded upon to build the required action.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1729004064-6-agentforce-heroku-action.png" alt="Heroku and Agentforce action diagram"></p> <p>The primary changes to note are:</p> <ul> <li> <strong>Java, along with <a href="https://spring.io/projects/spring-boot">Spring Boot</a></strong><br> The Spring framework offers a wide range of tools that make managing data, security, and calling AI LLMs (Large Language Models) very simple with minimal code. It supports both web and API-based applications.</li> <li> <strong><a href="https://www.h2database.com/html/main.html">H2</a> is a highly optimized in-memory database</strong><br> Stores data from processed invoice documents in a relational form, ready for querying.</li> <li> <strong><a href="https://springdoc.org/">springdoc.org</a> is used to generate an OpenAPI schema</strong><br> Java is a strongly typed language, making it an excellent choice for building and defining APIs. This library requires minimal configuration for compliant OpenAPI APIs, which are required by External Services.</li> <li> <strong><a href="https://spring.io/projects/spring-ai">Spring AI</a> has been used to simplify access to industry LLMs</strong><br> Spring AI is easy to configure and often requires minimal coding—sometimes just one line of code—to tap into powerful LLMs, such as those provided by OpenAI and others. In this case, it is responsible for taking the natural language query entered into the Agentforce agent and converting it into SQL, which is run against the H2 database. The result of this query is then returned to Agentforce and integrated into a natural language response for the user.</li> </ul> <p>If you're interested in viewing the code and a demonstration, you can watch the video below. When you're ready to deploy it yourself, review the deployment steps in the <a href="https://github.com/heroku-reference-apps/agentforce-archive-agent?tab=readme-ov-file#archive-agent-for-use-with-agentforce">README</a>.</p> <div class="embedded-video-wrapper"> <iframe width="560" height="315" src="https://www.youtube.com/embed/mNgrdf1GX-w?si=JbPh-Ufb63z8HoMo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> </div> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>Code is a powerful tool for integration, but keep in mind that Heroku also provides out-of-the-box integrations that bring Salesforce data closer to your application through <a href="https://www.heroku.com/postgres">Heroku Postgres</a> and our <a href="https://www.heroku.com/connect">Heroku Connect</a> product. We also support <a href="https://blog.heroku.com/introducing-the-heroku-postgres-connector-for-salesforce-data-cloud">integrations with Data Cloud</a>. Heroku also offers <a href="https://devcenter.heroku.com/articles/pgvector-heroku-postgres">pgvector</a> as an extension to its managed Postgres offering, providing a world class vector database to support your retrieval augmented generation and semantic search needs. You can see it in action <a href="https://github.com/heroku-reference-apps/ask-pdf">here</a>. 
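To give a flavor of what that looks like, here is a minimal pgvector sketch with a hypothetical table and toy three-dimensional vectors (real embeddings typically have hundreds or thousands of dimensions):</p> <pre><code class="language-sql">-- Enable the extension, then store and search embeddings
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE invoice_chunks (
  id        bigserial PRIMARY KEY,
  content   text,
  embedding vector(3)
);

INSERT INTO invoice_chunks (content, embedding)
VALUES ('Invoice 1042: roofing materials', '[0.1, 0.9, 0.2]');

-- Nearest-neighbor search by L2 distance
SELECT content
  FROM invoice_chunks
 ORDER BY embedding &lt;-&gt; '[0.1, 0.8, 0.3]'
 LIMIT 5;</code></pre> <p>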
While this blog's customer scenario didn’t require these capabilities, other agent use cases may well benefit from these features, further boosting your agent actions! Last but not least, we at Heroku consider feedback a gift, so if you have broader ideas or feedback, please connect with us via the <a href="https://github.com/heroku/roadmap">Heroku GitHub roadmap</a>.</p> <h2 class="anchored"> <a name="updates" href="#updates">Updates</a> </h2> <p>Since publishing this blog, we have released additional content we wanted to share. </p> <p>This step by step <a href="https://github.com/heroku-examples/heroku-agentforce-tutorial?tab=readme-ov-file#creating-agentforce-custom-actions-with-heroku">tutorial</a>, available in Java and Python, will guide you through configuring an Agentforce Action deployed on Heroku within your Salesforce org. By the end, you will be able to ask Agentforce to generate your own badge, as shown below!</p> <div style="max-width:500px; margin-bottom:30px;"> <img src="https://heroku-blog-files.s3.amazonaws.com/posts/1732029927-agentforce-heroku-tutorial.png" alt="Heroku Agentforce Tutorial"> </div> <p>An additional demonstration <a href="https://www.youtube.com/watch?v=yd97A9GLFUA&amp;t=2s">video</a> and <a href="https://github.com/heroku-examples/agentforce-collage-agent">sample code</a>, diving deeper into how Heroku enhances Agentforce agents' capabilities. In this expanded version of the popular <a href="https://developer.salesforce.com/sample-apps">Coral Cloud Resort demo</a>, vacationing guests can use Agentforce to browse and book unique experiences. With Heroku, the agent can even generate personalized adventure collages for each guest, showcasing how custom code on Heroku enables dynamic digital media creation directly within the Agentforce platform.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1731654192-coral-cloud-resorts.png" alt="Coral Cloud Resorts"></p> </description> <author>Andrew Fawcett</author> </item> <item> <title>Best Practices for Optimizing Your Enterprise Cloud Applications with New Relic</title> <link>https://blog.heroku.com/best-practices-optimizing-enterprise-cloud-applications-new-relic</link> <pubDate>Tue, 08 Oct 2024 19:13:37 GMT</pubDate> <guid>https://blog.heroku.com/best-practices-optimizing-enterprise-cloud-applications-new-relic</guid> <description><p>If your cloud application performs poorly or is unreliable, users will walk away, and your enterprise will suffer. To know what’s going on inside of your million-concurrent-user application (Don’t worry, you’ll get there!), you need observability. Observability gives you the insights you need to understand how your application behaves. As your application and architecture scale up, effective observability becomes increasingly indispensable.</p> <p>Heroku gives you more than just a flexible and developer-friendly platform to run your cloud applications. You also get access to a suite of built-in observability features. Heroku's core application metrics, alerts, and language-specific runtime metrics offer a comprehensive view of your application’s performance across the entirety of your stack. With these features, you can monitor and respond to issues with speed.</p> <!-- more --> <p>In this article, we’ll look at these key observability features from Heroku. For specific use cases with more complexity, your enterprise might lean on supplemental features and more granular data from the <a href="https://elements.heroku.com/addons/newrelic">New Relic add-on</a>. 
We’ll explore those possibilities as well.</p> <p>At the end of the day, robust observability is a must-have for your enterprise cloud applications. Let’s dive into how Heroku gives you what you need.</p> <h2 class="anchored"> <a name="application-metrics" href="#application-metrics">Application Metrics</a> </h2> <p>Heroku provides several <a href="https://devcenter.heroku.com/articles/metrics">application-level metrics</a> to help you investigate issues and perform effective root cause analysis. For web dynos (isolated, virtualized containers), Heroku gives you easy access to <a href="https://devcenter.heroku.com/articles/metrics#metrics-gathered-for-web-dynos-only">response time and throughput</a> metrics.</p> <ul> <li>Response time metrics include the median, 95th percentile, and 99th percentile times, offering a clear picture of how quickly the application responds under typical and extreme conditions.</li> <li>Throughput metrics are broken down by HTTP status codes, helping you identify traffic patterns and pinpoint areas where requests may be failing.</li> </ul> <p>Across all dyno types (except eco), Heroku gathers <a href="https://devcenter.heroku.com/articles/metrics#metrics-gathered-for-web-dynos-only">memory usage and dyno load</a> metrics. </p> <ul> <li>Memory usage metrics include data on total memory, RSS (resident set size), and swap usage. These are vital for understanding how efficiently your application uses memory and whether it’s at risk of exceeding memory quotas and triggering errors.</li> <li>Dyno load measures the load on the container’s CPU, providing a view into how many processes are competing for time — a signal of whether your application is overburdened or not.</li> </ul> <p>These metrics are crucial for root cause analysis. As you examine trends and spikes in these metrics, you can identify bottlenecks and inefficiencies, preemptively addressing potential failures before they escalate. Whether you’re seeing a surge of slow response times or an anomalous increase in memory usage, these metrics guide developers in tracing the problem back to its source. Equipped with these metrics, your enterprise can ensure faster and more effective issue resolution.</p> <style> .post-body img { border:1px solid rgba(0,0,0,0.1); } </style> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1727713544-1-application-metrics.png" alt="1-application-metrics"></p> <h2 class="anchored"> <a name="threshold-alerting" href="#threshold-alerting">Threshold Alerting</a> </h2> <p><a href="https://devcenter.heroku.com/articles/metrics#threshold-alerting">Threshold alerting</a> allows you to set specific thresholds for critical application metrics. When your application exceeds these thresholds, alerts are automatically triggered, and you’re notified of potential issues before they escalate into major problems. With alerts, you can take a proactive approach to maintaining application performance and reliability.</p> <p>This is particularly useful for keeping an eye on response time, memory usage, and CPU load.
By setting appropriate thresholds, you ensure that your application operates within its optimal parameters to prevent resource exhaustion and maintain performance.</p> <p>Threshold alerting is available exclusively for Heroku’s professional-tier dynos (<code>Standard-1X</code>, <code>Standard-2X</code>, and all <code>Performance</code> dynos).</p> <div style="max-width:250px; margin:0 auto 40px;"> <img src="https://heroku-blog-files.s3.amazonaws.com/posts/1727713576-2-threshold-alerting.png" alt="threshold alerting"> </div> <h2 class="anchored"> <a name="language-runtime-metrics" href="#language-runtime-metrics">Language Runtime Metrics</a> </h2> <p>Heroku provides detailed insights into memory usage by offering language-specific runtime metrics for applications running on JVM, Go, Node.js, or Ruby. Metrics include:</p> <ul> <li> <strong>JVM applications</strong>: Heap memory usage and garbage collection times.</li> <li> <strong>Go applications</strong>: Heap and stack memory usage, goroutines, and garbage collection statistics.</li> <li> <strong>Node.js and Ruby applications</strong>: Heap and non-heap memory usage breakdowns.</li> </ul> <p>These insights help developers identify memory leaks, optimize performance, and ensure efficient resource utilization. Understanding how memory is consumed allows developers to fine-tune their applications and avoid memory-related crashes. By tapping into these metrics, you can maintain smoother, more reliable performance.</p> <p>These metrics are available on all dynos (except eco), using the supported languages.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1727713586-3-language-runtime-metrics.png" alt="3-language-runtime-metrics"></p> <p>To utilize these features, first enable them in your Heroku account. Then, import the appropriate library within your application’s build and redeploy. </p> <h2 class="anchored"> <a name="heroku-and-new-relic-for-the-win" href="#heroku-and-new-relic-for-the-win">Heroku and New Relic for the Win</a> </h2> <p>In most cases, the above observability features give you enough information to troubleshoot and optimize your cloud applications. However, in more complex situations, you may want an additional boost through a dedicated application performance monitoring (APM) solution such as New Relic. Heroku offers the <a href="https://elements.heroku.com/addons/newrelic">New Relic APM add-on</a>, which lets you track detailed performance metrics, monitor application health, and diagnose issues with real-time data and insights.</p> <p>Key features from New Relic include:</p> <ul> <li> <strong>Code-level diagnostics</strong>: Allows developers to identify problematic areas in their code that may be causing performance bottlenecks. This helps in optimizing the application and ensuring lower-latency user experiences.</li> <li> <strong>Transaction tracing</strong>: Provides visibility into the life cycle of each transaction within the application.
Trace requests from start to finish, pinpointing delays or errors that may occur during specific processes.</li> <li> <strong>Customizable instrumentation</strong>: Enables developers to tailor the monitoring and data collection to their specific needs, providing more granular insights and control over application performance</li> </ul> <p>Features such as these enable more effective troubleshooting and optimization, helping you ensure that your applications run efficiently even under heavy load.</p> <p>The New Relic APM add-on integrates seamlessly with your application, automatically capturing detailed performance data. With the add-on installed, you can:</p> <ul> <li>Regularly review transaction traces to identify slow-performing transactions.</li> <li>Use error analytics to monitor and address issues in real time.</li> <li>Leverage detailed diagnostics to continuously improve the application's performance.</li> </ul> <p>Connecting your application to New Relic agents is straightforward. You simply install a New Relic library in your codebase and redeploy. The APM solution’s advanced features also allow for more fine-grained control of the data you’re sending. In addition to monitoring application state and metrics, you can also use it to monitor logs and infrastructure.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1727713613-4-new-relic-heroku-2.png" alt="4-new-relic-heroku-2"> <img src="https://heroku-blog-files.s3.amazonaws.com/posts/1727713612-4-new-relic-heroku-1.png" alt="4-new-relic-heroku-1"></p> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>In this blog, we’ve explored the advanced observability features from Heroku along with the additional power offered by the New Relic APM add-on. Heroku’s observability features alone provide the metrics and alerting capabilities that can go a long way toward safeguarding your deployments and customers’ experience. New Relic further enhances observability with its APM capabilities, such as code-level diagnostics and transaction tracing.</p> <p>Staying proactive with cloud application observability is key to maintaining enterprise application efficiency. Robust observability helps you ensure that your applications are running smoothly, and it also enables you to handle unexpected challenges. With a strong observability solution, you gain insights that help you sustain application performance and deliver a superior user experience.</p> <p>To learn more about enterprise observability, read more about the features <a href="https://www.heroku.com/enterprise">Heroku Enterprise</a> has to offer, or <a href="https://www.heroku.com/enterprise#contact">contact us</a> to help you get started.</p> </description> <author>Julián Duque</author> </item> <item> <title>Electron on Heroku</title> <link>https://blog.heroku.com/electron-on-heroku</link> <pubDate>Mon, 30 Sep 2024 11:00:00 GMT</pubDate> <guid>https://blog.heroku.com/electron-on-heroku</guid> <description><p>As maintainers of the <a href="https://www.electronjs.org/">open source framework Electron</a>, we try to be diligent about the work we take on. Apps like Visual Studio Code, Slack, Notion, or 1Password are <a href="https://www.electronjs.org/apps">built on top of Electron</a> and make use of our unique mix of native code and web technologies to make their users happy. That requires focus: There’s always more work to be done than we have time and resources for. 
In practice, that means that we don’t want to spend time thinking about the server infrastructure for the project — and we’re grateful for the support we receive from Heroku, where we can host load-intensive apps without worrying about managing the underlying infrastructure. In this blog post, we’ll take a look at some of the ways in which we use Heroku.</p> <!-- more --> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726768343-1-electron-on-heroku.png" alt="1-electron-on-heroku"></p> <h2 class="anchored"> <a name="a-sip-from-the-fire-hose-electron-s-update-service" href="#a-sip-from-the-fire-hose-electron-s-update-service">A sip from the fire hose: Electron’s update service</a> </h2> <p>Updating desktop software is tricky: Unlike websites, which you can update simply by pushing new code to your server, or mobile apps, which you can update using the app stores, desktop apps usually need to update themselves. This process requires a cloud service that serves information about the latest versions as well as the actual binaries themselves.</p> <p>To make that easier, Electron offers a free update service powered by Heroku and GitHub Releases. You can add it to your app by visiting <a href="http://update.electronjs.org">update.electronjs.org</a>. The underlying Heroku service is a <a href="https://github.com/electron/update.electronjs.org/blob/main/src/updates.js">humble little Node.js app</a>, hosted inside a single web dyno, yet it consistently serves more than 100 requests per second with response times under 1ms, using less than 100MB of memory. In other words, we’re serving at peak almost half a million requests per hour with nothing but the default <a href="https://github.com/electron/update.electronjs.org/blob/main/Procfile">Procfile</a>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726768353-2-electron-heroku-redis.png" alt="2-electron-heroku-redis"></p> <p>We’re using a simple staging/production pipeline and <a href="https://www.heroku.com/redis">Heroku Data for Redis</a> as a lightweight data store. In other words, we’re benefiting from sensible defaults — the fact that Heroku doesn’t have us set up or manage keeping this service online means that we didn’t really have to look at it in 2024. It works, allowing us to focus on the things that don’t.</p> <h2 class="anchored"> <a name="making-slack-a-little-better-for-us" href="#making-slack-a-little-better-for-us">Making Slack a little better for us</a> </h2> <p>Like most open source projects, Electron needs to be constantly mindful of its most limited resource: The time of its maintainers. To make our work easier, we’re making heavy use of bots and automation wherever possible. Those bots run on Heroku, since we ideally want to set them up and never think about them again.</p> <p>Take the <a href="https://github.com/electron/slack-chromium-helper">slack-chromium-helper</a> as an example: If you send a URL to a Chromium Developer Resource in Slack, this bot will fetch the content of that resource and automatically unfurl it.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726768371-3-slack-chromium-helper.png" alt="3-slack-chromium-helper"></p> <p>To build this bot, we used Slack’s own <code>@slack/bolt</code> framework. On the Heroku side, no custom configuration is necessary: We’re using a basic web dyno, which automatically runs <code>npm install</code>, <code>npm run build</code>, and <code>npm start</code>.
The attached data store is <a href="https://www.heroku.com/postgres">Heroku Postgres</a> on the “<a href="https://elements.heroku.com/addons/heroku-postgresql#pricing">essential</a>” plan. In other words, we’re getting a persistent, fully-managed data store for cents.</p> <p>Here too, the main feature of Heroku to us is that it “just works”: We can use the tools we’re familiar with, write an automation that saves us time when working in Slack, and don’t have to worry about long-term maintenance. We’re thankful that we never have to think about upgrading a server operating system.</p> <h2 class="anchored"> <a name="github-automated" href="#github-automated">GitHub, Automated</a> </h2> <p>Many PRs opened against <code>electron/electron</code> are actually made by our bots — the most important one being electron/roller, which automatically attempts to update our major dependencies, Node.js and Chromium. So far, our bot has opened more than 400 PRs — <a href="https://github.com/electron/electron/pull/42615">like this one</a>, bumping our Node.js version to v20.15, updating the release notes, and adding labels to power subsequent automation.</p> <p>The bot is, once again, powered by a Node.js app running on a Heroku web dyno. It uses the popular GitHub <a href="https://github.com/probot/probot">Probot framework</a> to automatically respond to closed <a href="https://github.com/electron/roller/blob/main/src/index.ts">pull requests and new issues comments</a>. To make sure that it automatically attempts to perform updates, we’re using <a href="https://devcenter.heroku.com/articles/scheduler">Heroku Scheduler</a>, which calls scripts on our app daily.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726768381-4-heroku-scheduler-electron.png" alt="4-heroku-scheduler-electron"></p> <h2 class="anchored"> <a name="platform-as-a-service" href="#platform-as-a-service">Platform as a Service</a> </h2> <p>If you’d ask the Electron maintainers about Heroku, we’d tell you that we don’t think about it that much. We organize our work by focusing on the features that need to be built the most, the bugs that need to be fixed first, and the tooling changes we need to make to make the lives of Electron app developers as easy as possible.</p> <p>For us, Heroku just works. We can quickly spin up web services, bots, and automations using the tools we like the most — in our case, Node.js apps, developed on GitHub, paired with straightforward data stores. Thanks to easy SSO integration, the entire group has the access they need without giving anyone too much power.</p> <p>That is what we like the most about Heroku: How it works. 
We like it as much as we like electricity coming out of our sockets: Essential to the work that we do, yet never a headache or a problem that needs to be solved.</p> <p>We’d like to thank Heroku and Salesforce for being such strong supporters of open source technologies, for their contributions to the ecosystem, and, in the case of Electron, for their direct contribution towards delightful desktop software.</p> </description> <author>Felix Rieseberg</author> </item> <item> <title>Simplify Your Cloud Security: Heroku ACM Now Supports Wildcard Domains</title> <link>https://blog.heroku.com/heroku-acm-now-supports-wildcard-domains</link> <pubDate>Thu, 26 Sep 2024 22:09:00 GMT</pubDate> <guid>https://blog.heroku.com/heroku-acm-now-supports-wildcard-domains</guid> <description><p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1727193445-Wildcard%20Support%20for%20Heroku%20ACM_Blog%20Image_Option%201.png" alt="Wildcard Support for Heroku ACM_Blog Image_Option 1"></p> <p>We are thrilled to announce that <a href="https://devcenter.heroku.com/articles/automated-certificate-management">Heroku Automated Certificate Management (ACM)</a> now supports wildcard domains for the Common Runtime!</p> <p>Heroku ACM’s support for wildcard domains streamlines your cloud management by allowing Heroku’s certificate management to cover all your desired subdomains with only one command, reducing networking setup overhead and providing more flexibility while enhancing the overall security of your applications.</p> <p>This highly requested feature is here, and in this blog post, we'll dive into what wildcard domains are, why you should use them, and the new possibilities this support brings to Heroku ACM.</p> <!-- more --> <h2 class="anchored"> <a name="what-s-a-wildcard-domain-and-why-should-i-use-it" href="#what-s-a-wildcard-domain-and-why-should-i-use-it">What’s a Wildcard Domain and Why Should I Use It?</a> </h2> <p>A wildcard domain is a <a href="https://datatracker.ietf.org/doc/html/rfc4592#section-2.1.1">domain that includes a wildcard character</a> (an asterisk, *) in place of a subdomain. For example, <code>*.example.com</code> is a wildcard domain that can cover <code>www.example.com</code>, <code>blog.example.com</code>, <code>shop.example.com</code>, and any other subdomain of example.com.</p> <p>Using wildcard domains offers several benefits:</p> <ul> <li><p><strong>Simplified Management</strong>: Instead of managing individual certificates for each subdomain, a single wildcard certificate can cover all subdomains, reducing administrative overhead.</p></li> <li><p><strong>Cost Efficiency</strong>: Wildcard certificates can be more cost-effective than purchasing individual certificates for each subdomain.</p></li> <li><p><strong>Flexibility</strong>: Wildcard domains provide the flexibility to add new subdomains without issuing a new certificate each time.</p></li> </ul> <h2 class="anchored"> <a name="what-can-i-now-do-with-heroku-acm-since-it-s-supported" href="#what-can-i-now-do-with-heroku-acm-since-it-s-supported">What Can I Now Do with Heroku ACM Since It’s Supported?</a> </h2> <p>With the new support for wildcard domains in Heroku ACM, you can now:</p> <ul> <li><p><strong>Easily Secure Multiple Subdomains</strong>: Automatically secure all your subdomains with a single wildcard certificate.
This is particularly useful for applications that dynamically generate subdomains.</p></li> <li><p><strong>Streamline Certificate Management</strong>: Reduce the complexity of managing multiple certificates. Heroku ACM will handle the issuance, renewal, and management of your wildcard certificates, just as it does with regular certificates.</p></li> <li><p><strong>Enhance Security</strong>: Ensure that all your subdomains are consistently protected with HTTPS, improving the overall security posture of your applications.</p></li> </ul> <h2 class="anchored"> <a name="how-to-use-your-wildcard-domain-with-heroku-acm" href="#how-to-use-your-wildcard-domain-with-heroku-acm">How to use your Wildcard Domain with Heroku ACM</a> </h2> <p>Previously, you would've seen an error message when trying to add a wildcard domain with Heroku ACM enabled, or when trying to enable Heroku ACM when your app was associated with a wildcard domain. </p> <p>Now, you can follow the typical steps to <a href="https://devcenter.heroku.com/articles/custom-domains#add-a-custom-domain-with-a-subdomain">add a custom domain</a> to your Heroku app using the following command: </p> <pre><code>$ heroku domains:add *.example.com -a example-app </code></pre> <p>Once the domain is added, you can <a href="https://devcenter.heroku.com/articles/automated-certificate-management#common-runtime">enable Heroku ACM</a> using the following command: </p> <pre><code>$ heroku certs:auto:enable </code></pre> <p>And just like that, you can utilize your wildcard domain and still have all of your certificates managed by Heroku!</p> <h2 class="anchored"> <a name="wildcard-domain-support-for-private-spaces" href="#wildcard-domain-support-for-private-spaces">Wildcard Domain Support for Private Spaces</a> </h2> <p>At the time of this post, Wildcard Domain support in Heroku ACM is only available for our Common Runtime customers. </p> <p>Support for Wildcard Domains for Private Spaces will be coming soon as part of our focus on improving the entire Private Spaces platform. You can find more details about <a href="https://github.com/heroku/roadmap/issues/130">that project</a> on our <a href="https://github.com/heroku/roadmap">GitHub Public Roadmap</a>.</p> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>The addition of wildcard domain support to Heroku ACM significantly enhances our platform's networking capabilities. Heroku is committed to making it easier to manage and secure your application's incoming and outgoing networking connections. This change, along with our recent addition of <a href="https://blog.heroku.com/heroku-http2-public-beta">HTTP/2</a> and our <a href="https://blog.heroku.com/router-2dot0-the-road-to-beta">new router</a>, is part of the investment Heroku is making to modernize our feature offerings. </p> <p>This change was driven by feedback from the <a href="https://github.com/heroku/roadmap/issues/39">Heroku Public GitHub roadmap</a>. We encourage you to keep an eye on our roadmap, where you can see the features we are working on and provide your input.
Your feedback is invaluable and helps shape the future of Heroku.</p> </description> <author>Ethan Limchayseng</author> </item> <item> <title>Testing a React App in Chrome with Heroku CI</title> <link>https://blog.heroku.com/testing-react-app-chrome-heroku-ci</link> <pubDate>Thu, 19 Sep 2024 21:36:00 GMT</pubDate> <guid>https://blog.heroku.com/testing-react-app-chrome-heroku-ci</guid> <description><p>When building web applications, unit testing your individual components is certainly important. However, end-to-end testing provides assurance that the final user experience of your components chained together matches the expected behavior. Testing web application behavior locally in your browser can be helpful, but this approach isn’t efficient or reliable, especially as your application grows more complex.</p> <p>Ideally, end-to-end tests in your browser are automated and integrated into your CI pipeline. Every time you commit a code change, your tests will run. Passing tests gives you the confidence that the application — as your end users experience it — behaves as expected.</p> <!-- more --> <p>With Heroku CI, you can run end-to-end tests with headless Chrome. The <a href="https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-chrome-for-testing">Chrome for Testing Heroku Buildpack</a> installs Google Chrome Browser (<code>chrome</code>) and <code>chromedriver</code> in a Heroku app. You can learn more about this Heroku Buildpack in a <a href="https://blog.heroku.com/improved-browser-testing-on-heroku-with-chrome">recent post</a>.</p> <p>In this article, we’ll walk through the simple steps for using this Heroku Buildpack to perform basic end-to-end testing for a React application in Heroku CI.</p> <!-- more --> <h2 class="anchored"> <a name="brief-introduction-to-our-react-app" href="#brief-introduction-to-our-react-app">Brief Introduction to our React App</a> </h2> <p>Since this is a simple walkthrough, we’ve built a very simple React application, consisting of a single page with a link and a form. The form has a text input and a submit button. When the user enters their name in the text input and submits the form, the page displays a simple greeting with the name included.</p> <p>It looks like this:</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781358-2-greeting-app.png" alt="2-greeting-app"> <img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781358-1-greeting-app.png" alt="1-greeting-app"></p> <p>Super simple, right? What we want to focus on, however, are end-to-end tests that validate the end-user experience for the application. 
To test our application, we use <a href="https://jestjs.io/">Jest</a> (a popular JavaScript testing framework) and <a href="https://pptr.dev/">Puppeteer</a> (a library for running headless browser testing in either Chrome or Firefox).</p> <p>If you want to download the simple source code and tests for this application, you can check out this <a href="https://github.com/heroku-examples/chrome-for-testing-example">GitHub repository</a>.</p> <p>The code for this simple page is in <code>src/App.js</code>:</p> <pre><code class="language-javascript">import React, { useState } from 'react'; import { Container, Box, TextField, Button, Typography, Link } from '@mui/material'; function App() { const [name, setName] = useState(''); const [greeting, setGreeting] = useState(''); const handleSubmit = (e) =&gt; { e.preventDefault(); setGreeting(`Nice to meet you, ${name}!`); }; return ( &lt;Container maxWidth="sm" style={{ marginTop: '50px' }}&gt; &lt;Box textAlign="center"&gt; &lt;Typography variant="h4" gutterBottom&gt; Welcome to the Greeting App &lt;/Typography&gt; &lt;Link href="https://pptr.dev/" rel="noopener"&gt; Puppeteer Documentation &lt;/Link&gt; &lt;Box component="form" onSubmit={handleSubmit} mt={3}&gt; &lt;TextField name="name" label="What is your name?" variant="outlined" fullWidth value={name} onChange={(e) =&gt; setName(e.target.value)} margin="normal" /&gt; &lt;Button variant="contained" color="primary" type="submit" fullWidth&gt; Say hello to me &lt;/Button&gt; &lt;/Box&gt; {greeting &amp;&amp; ( &lt;Typography id="greeting" variant="h5" mt={3}&gt; {greeting} &lt;/Typography&gt; )} &lt;/Box&gt; &lt;/Container&gt; ); } export default App; </code></pre> <h2 class="anchored"> <a name="running-in-browser-end-to-end-tests-locally" href="#running-in-browser-end-to-end-tests-locally">Running In-Browser End-to-End Tests Locally</a> </h2> <p>Our simple set of tests is in a file called <code>src/tests/puppeteer.test.js</code>. The file contents look like this:</p> <pre><code class="language-javascript">const ROOT_URL = 'http://localhost:8080'; describe('Page tests', () =&gt; { const inputSelector = 'input[name="name"]'; const submitButtonSelector = 'button[type="submit"]'; const greetingSelector = 'h5#greeting'; const name = 'John Doe'; beforeEach(async () =&gt; { await page.goto(ROOT_URL); }); describe('Puppeteer link', () =&gt; { it('should navigate to Puppeteer documentation page', async () =&gt; { await page.click('a[href="https://pptr.dev/"]'); await expect(page.title()).resolves.toMatch('Puppeteer | Puppeteer'); }); }); describe('Text input', () =&gt; { it('should display the entered text in the text input', async () =&gt; { await page.type(inputSelector, name); // Verify the input value const inputValue = await page.$eval(inputSelector, el =&gt; el.value); expect(inputValue).toBe(name); }); }); describe('Form submission', () =&gt; { it('should display the "Hello, X" message after form submission', async () =&gt; { const expectedGreeting = `Hello, ${name}.`; await page.type(inputSelector, name); await page.click(submitButtonSelector); await page.waitForSelector(greetingSelector); const greetingText = await page.$eval(greetingSelector, el =&gt; el.textContent); expect(greetingText).toBe(expectedGreeting); }); }); }); </code></pre> <p>Let’s highlight a few things from our testing code above:</p> <ul> <li>We’ve told Puppeteer to expect an instance of the React application to be up and running at <code>http://localhost:8080</code>. 
For each test in our suite, we direct the Puppeteer <code>page</code> to visit that URL.</li> <li>We test the link at the top of our page, ensuring that a link click redirects the browser to the correct external page (in this case, the Puppeteer Documentation page).</li> <li>We test the text input, verifying that a value entered into the field is retained as the input value.</li> <li>We test the form submission, verifying that the correct greeting is displayed after the user submits the form with a value in the text input.</li> </ul> <p>The tests are simple, but they are enough to demonstrate how headless in-browser testing ought to work.</p> <h3 class="anchored"> <a name="minor-modifications-to-code-package-json-code" href="#minor-modifications-to-code-package-json-code">Minor modifications to <code>package.json</code></a> </h3> <p>We bootstrapped this app by using <a href="https://create-react-app.dev/">Create React App</a>. However, we made some modifications to our <code>package.json</code> file just to make our development and testing process smoother. First, we modified the <code>start</code> script to look like this:</p> <pre><code class="language-bash">"start": "PORT=8080 BROWSER=none react-scripts start" </code></pre> <p>Notice that we specified the port that we want our React application to run on (<code>8080</code>). We also set <code>BROWSER=none</code> to prevent the opening of a browser with our application every time we run this script. We won’t need this, especially as we move to headless testing in a CI pipeline.</p> <p>We also have our <code>test</code> script, which simply runs <code>jest</code>:</p> <pre><code class="language-bash">"test": "jest" </code></pre> <h3 class="anchored"> <a name="start-up-the-server-and-run-tests" href="#start-up-the-server-and-run-tests">Start up the server and run tests</a> </h3> <p>Let’s spin up our server and run our tests. In one terminal, we start the server:</p> <pre><code class="language-bash">~/project$ npm run start Compiled successfully! You can now view project in the browser. Local: http://localhost:8080 On Your Network: http://192.168.86.203:8080 Note that the development build is not optimized. To create a production build, use npm run build. webpack compiled successfully </code></pre> <p>With our React application running and available at <code>http://localhost:8080</code>, we run our end-to-end tests in a separate terminal:</p> <pre><code class="language-bash">~/project$ npm run test FAIL src/tests/puppeteer.test.js Page tests Puppeteer link ✓ should navigate to Puppeteer documentation page (473 ms) Text input ✓ should display the entered text in the text input (268 ms) Form submission ✕ should display the "Hello, X" message after form submission (139 ms) ● Page tests › Form submission › should display the "Hello, X" message after form submission expect(received).toBe(expected) // Object.is equality Expected: "Hello, John Doe." Received: "Nice to meet you, John Doe!" 36 | await page.waitForSelector(greetingSelector); 37 | const greetingText = await page.$eval(greetingSelector, el =&gt; el.textContent); &gt; 38 | expect(greetingText).toBe(expectedGreeting); | ^ 39 | }); 40 | }); 41 | }); at Object.toBe (src/tests/puppeteer.test.js:38:28) Test Suites: 1 failed, 1 total Tests: 1 failed, 2 passed, 3 total Snapshots: 0 total Time: 1.385 s, estimated 2 s Ran all test suites. </code></pre> <p>And… we have a failing test. It looks like our greeting message is wrong.
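The test expects the greeting to read "Hello, &lt;name&gt;.", while <code>App.js</code> currently builds "Nice to meet you, &lt;name&gt;!", so the fix is a one-line change to the greeting string in the <code>handleSubmit</code> handler:</p> <pre><code class="language-javascript">const handleSubmit = (e) =&gt; {
  e.preventDefault();
  // Use the greeting format that the end-to-end test expects
  setGreeting(`Hello, ${name}.`);
};</code></pre> <p>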
We fix our code in <code>App.js</code> and then run our tests again.</p> <pre><code class="language-bash">~/project$ npm run test &gt; project@0.1.0 test &gt; jest PASS src/tests/puppeteer.test.js Page tests Puppeteer link ✓ should navigate to Puppeteer documentation page (567 ms) Text input ✓ should display the entered text in the text input (260 ms) Form submission ✓ should display the "Hello, X" message after form submission (153 ms) Test Suites: 1 passed, 1 total Tests: 3 passed, 3 total Snapshots: 0 total Time: 1.425 s, estimated 2 s Ran all test suites. </code></pre> <h3 class="anchored"> <a name="combine-server-startup-and-test-execution" href="#combine-server-startup-and-test-execution">Combine server startup and test execution</a> </h3> <p>We’ve fixed our code, and our tests are passing. However, starting up the server and running tests should be a single process, especially as we intend to run this in a CI pipeline. To serialize these two steps, we’ll use the <a href="https://www.npmjs.com/package/start-server-and-test">start-server-and-test</a> package. With this package, we can use a single script command to start our server, wait for the URL to be ready, and then run our tests. Then, when the test run finishes, it stops the server.</p> <p>We install the package and then add a new line to the <code>scripts</code> in our <code>package.json</code> file:</p> <pre><code class="language-bash">"test:ci": "start-server-and-test start http://localhost:8080 test" </code></pre> <p>Now, running <code>npm run test:ci</code> invokes the <code>start-server-and-test</code> package to first start up the server by running the start script, waiting for <code>http://localhost:8080</code> to be available, and then running the <code>test</code> script.</p> <p>Here is what it looks like to run this command in a single terminal window:</p> <pre><code class="language-bash">~/project$ npm run test:ci &gt; project@0.1.0 test:ci &gt; start-server-and-test start http://localhost:8080 test 1: starting server using command "npm run start" and when url "[ 'http://localhost:8080' ]" is responding with HTTP status code 200 running tests using command "npm run test" &gt; project@0.1.0 start &gt; PORT=8080 BROWSER=none react-scripts start Starting the development server... Compiled successfully! You can now view project in the browser. Local: http://localhost:8080 On Your Network: http://172.16.35.18:8080 Note that the development build is not optimized. To create a production build, use npm run build. webpack compiled successfully &gt; project@0.1.0 test &gt; jest PASS src/tests/puppeteer.test.js Page tests Puppeteer link ✓ should navigate to Puppeteer documentation page (1461 ms) Text input ✓ should display the entered text in the text input (725 ms) Form submission ✓ should display the "Hello, X" message after form submission (441 ms) Test Suites: 1 passed, 1 total Tests: 3 passed, 3 total Snapshots: 0 total Time: 4.66 s Ran all test suites. </code></pre> <p>Now, our streamlined testing process runs with a single command. We’re ready to try our headless browser testing with Heroku CI.</p> <h2 class="anchored"> <a name="running-our-tests-in-heroku-ci" href="#running-our-tests-in-heroku-ci">Running Our Tests in Heroku CI</a> </h2> <p>Getting our testing process up and running in Heroku CI requires only a few simple steps.</p> <h3 class="anchored"> <a name="add-code-app-json-code-file" href="#add-code-app-json-code-file">Add <code>app.json</code> file</a> </h3> <p>We need to add a file to our code repository. 
The file, <code>app.json</code>, is in our project root folder. It looks like this:</p> <pre><code class="language-javascript">{ "environments": { "test": { "buildpacks": [ { "url": "heroku-community/chrome-for-testing" }, { "url": "heroku/nodejs" } ], "scripts": { "test": "npm run test:ci" } } } } </code></pre> <p>In this file, we specify the buildpacks that we will need for our project. We make sure to add the <a href="https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-chrome-for-testing">Chrome for Testing buildpack</a> and the <a href="https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-nodejs">Node.js buildpack</a>. Then, we specify what we want Heroku’s execution of a test script command to do. In our case, we want Heroku to run the <code>test:ci</code> script we’ve defined in our <code>package.json</code> file.</p> <h3 class="anchored"> <a name="create-a-heroku-pipeline" href="#create-a-heroku-pipeline">Create a Heroku pipeline</a> </h3> <p>In the Heroku dashboard, we click <strong>New ⇾ Create new pipeline</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781407-3-create-new-pipeline.png" alt="3-create-new-pipeline"></p> <p>We give our pipeline a name, and then we search for and select the GitHub repository that will be associated with our pipeline. You can fork our demo repo, and then use your fork for your pipeline.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781418-4-create-pipeline.png" alt="4-create-pipeline"></p> <p>After finding our GitHub repo, we click <strong>Connect</strong> and then <strong>Create pipeline</strong>.</p> <h3 class="anchored"> <a name="add-an-app-to-the-pipeline" href="#add-an-app-to-the-pipeline">Add an app to the pipeline</a> </h3> <p>Next, we need to add an app to our pipeline. We’ll add it to the Staging phase of our pipeline.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781428-5-add-app-to-pipeline.png" alt="5-add-app-to-pipeline"></p> <p>We click <strong>Create new app</strong>…</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781440-6-create-new-app.png" alt="6-create-new-app"></p> <p>This app will use the GitHub repo that we’ve already connected to our pipeline. 
We choose a name and region for our app and then click <strong>Create app</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781448-7-create-app.png" alt="7-create-app"></p> <p>With our Heroku app added to our pipeline, we’re ready to work with Heroku CI.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781456-8-staging-app.png" alt="8-staging-app"></p> <h3 class="anchored"> <a name="enable-heroku-ci" href="#enable-heroku-ci">Enable Heroku CI</a> </h3> <p>In our pipeline page navigation, we click <strong>Tests</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781463-9-tests.png" alt="9-tests"></p> <p>Then, we click <strong>Enable Heroku CI</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781473-10-enable-heroku-ci.png" alt="10-enable-heroku-ci"></p> <p>Just like that, Heroku CI is up and running.</p> <style type="text/css" scoped> .list-checks { list-style:none; padding-left:0; } .list-checks li{ position:relative; padding-left: 1.5em; } .list-checks li:before { position:absolute; left:0; top:0; height:1em; width:1em; content:'\2705\0020'; color: #50C878; } </style> <ul class="list-checks"> <li>We’ve created our Heroku pipeline.</li> <li>We’ve connected our GitHub repo.</li> <li>We’ve created our Heroku app.</li> <li>We’ve enabled Heroku CI.</li> <li>We have an <code>app.json</code> file that specifies our need for the Chrome for Testing and Node.js buildpacks, and tells Heroku what to do when executing the <code>test</code> script.</li> </ul> <p>That’s everything. It’s time to run some tests!</p> <h3 class="anchored"> <a name="run-tests-manual-trigger" href="#run-tests-manual-trigger">Run tests (manual trigger)</a> </h3> <p>On the <strong>Tests</strong> page for our Heroku pipeline, we click <strong>New Test ⇾ Start Test Run</strong> to manually trigger a run of our test suite.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781486-11-test-run.png" alt="11-test-run"></p> <p>As Heroku displays the output for this test run, we see immediately that it has detected our need for the Chrome for Testing buildpack and begins installing Chrome and all its dependencies.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781497-12-test-running.png" alt="12-test-running"></p> <p>After Heroku installs our application dependencies and builds the project, it executes <code>npm run test:ci</code>. This runs <code>start-server-and-test</code> to spin up our React application and then run our Jest/Puppeteer tests.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1726781497-13-test-succeed.png" alt="13-test-succeed"></p> <p>Success! Our end-to-end tests run, using headless Chrome via the Chrome for Testing Heroku Buildpack.</p> <p>With end-to-end tests integrated into our Heroku CI pipeline, any push to our GitHub repo triggers a run of our test suite. We get immediate feedback if any end-to-end tests fail, and we can configure our pipeline further to use review apps or promote staging apps to production.</p> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>As end-to-end testing in your web applications grows more complex, you’ll increasingly rely on headless browser testing that runs automatically as part of your CI pipeline. Manually running tests is neither reliable nor scalable. Every developer on the team needs a single, central place to run the suite of end-to-end tests. 
Automating these tests in Heroku CI is the way to go, and your testing capabilities just got a boost with the Chrome for Testing Buildpack.</p> <p>When you’re ready to start running your apps on Heroku and taking advantage of Heroku CI, <a href="https://signup.heroku.com/">sign up today</a>.</p> </description> <author>Julián Duque</author> </item> <item> <title>Discover Heroku at Dreamforce 2024</title> <link>https://blog.heroku.com/heroku-dreamforce-2024</link> <pubDate>Thu, 05 Sep 2024 14:31:00 GMT</pubDate> <guid>https://blog.heroku.com/heroku-dreamforce-2024</guid> <description><p><a href="https://www.salesforce.com/dreamforce/">Dreamforce</a> comes to San Francisco this September 17-19. Heroku, a Salesforce company, has a packed schedule with a variety of sessions and activities designed to enhance your knowledge of our platform and integrations with Salesforce technologies. </p> <p>Learn more about Heroku’s latest innovations by adding us to your agenda via the <a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog?search.product=1643812029327020eBEJ">Dreamforce Agenda Builder</a>. Here's where you can find Heroku at Dreamforce 2024.</p> <!-- more --> <!-- nothing to see here --> <h2 class="anchored"> <a name="heroku-demos-in-the-trailblazer-forest" href="#heroku-demos-in-the-trailblazer-forest">Heroku Demos in the Trailblazer Forest</a> </h2> <p>Whether you are a full-stack Salesforce Developer or just prefer the CLI, the Heroku demo booth is the best place to kick off Dreamforce. Dive into the latest product innovations and personalized live demos showcasing Heroku and <a href="https://www.salesforce.com/data/">Data Cloud</a>, plus how Heroku can integrate with the <a href="https://blog.heroku.com/mastering-api-integration-salesforce-heroku-mulesoft-anypoint-flex-gateway">MuleSoft Anypoint Flex Gateway</a>. This is also a great opportunity to interact with product managers and get your questions answered.</p> <p>Interested in AWS+Heroku? Be sure to stop by the Heroku demo at the AWS booth.</p> <h2 class="anchored"> <a name="camp-mini-hacks" href="#camp-mini-hacks">Camp Mini Hacks</a> </h2> <p>If you're a developer looking to challenge yourself, the Camp Mini Hacks are a must-visit. Connect with like-minded developers and tackle code challenges using Heroku and Salesforce technologies: Solve the Mega Hack Challenge, where you'll integrate a Heroku application with MuleSoft Anypoint Flex Gateway and Prompt Builder. It's a hands-on way to learn and showcase your skills.</p> <h2 class="anchored"> <a name="breakout-sessions" href="#breakout-sessions">Breakout Sessions</a> </h2> <p>Heroku's Breakout Sessions are perfect for those wanting to dive deeper into the platform's capabilities. Learn how other customers have successfully built and scaled their applications using Heroku. 
These sessions are informative and provide real-world insights into maximizing the potential of the platform.</p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1719342633607001xfyg-heroku-next-gen-for-cloud-native-workloads-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1719342633607001xfyg-heroku-next-gen-for-cloud-native-workloads-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1719342633607001XFYG">Heroku Next-Gen for Cloud Native Workloads</a> </h3> <p class="light df-sfplus">Also available on Salesforce+</p> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Chris Peterson</strong>, Senior Director, Product Management, Salesforce</li> <li> <strong>Ethan Limchayseng</strong>, Director Product Management - Heroku Runtime, Salesforce</li> <li> <strong>Vivek Viswanathan</strong>, Director Product Management, Salesforce</li> </ul> <p>Learn about Heroku's plan to iterate and expand our platform with our next-gen stack powered by Kubernetes, Heroku-native Data Cloud integration, .NET support, and cutting-edge Postgres offerings.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1719342633607001XFYG" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915980733001qd5n-maximizing-sales-potential-with-the-power-of-integration-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915980733001qd5n-maximizing-sales-potential-with-the-power-of-integration-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915980733001QD5N">Maximizing Sales Potential with the Power of Integration</a> </h3> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Alex Solomon</strong>, Software Engineering Leader, Cisco Meraki</li> <li> <strong>MK Korgaonkar</strong>, Data Integrations Product Manager, Cisco</li> </ul> <p>Cisco created an integrated sales ecosystem that empowers high-touch sellers across silos to operate as one cohesive team, enabling cross-selling and promoting revenue growth across the organization.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915980733001QD5N" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915853770001qwcv-engaging-customers-with-lamborghini-s-unica-experience-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915853770001qwcv-engaging-customers-with-lamborghini-s-unica-experience-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915853770001QwcV">Engaging Customers with Lamborghini’s “Unica” Experience</a> </h3> <p class="light df-sfplus">Also available on Salesforce+</p> 
<div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Lorenzo Cavicchi</strong>, Head of IT Commercial &amp; Supporting, Automobili Lamborghini S.p.A.</li> <li> <strong>David Baliles</strong>, Distinguished Technical Architect, Salesforce</li> <li> <strong>Filippo Tonutti</strong>, Next Generation Customer Journey, Automobili Lamborghini</li> </ul> <p>See how Lamborghini's Unica app, built on Heroku, engages drivers in real time with seamless, digital in-car integration. Discover how collected data enhances Lamborghini's B2B2C model and ecosystem.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915853770001QwcV" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1719342634333001xkoq-build-a-golden-customer-record-using-heroku-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1719342634333001xkoq-build-a-golden-customer-record-using-heroku-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1719342634333001XKOq">Build a Golden Customer Record Using Heroku</a> </h3> <p class="light df-sfplus">Also available on Salesforce+</p> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Barry Sheehan</strong>, Chief Commercial Officer, Showoff</li> <li> <strong>Martin Eley</strong>, Principal Technical Architect, Salesforce</li> <li> <strong>Tobias Lilley</strong>, Heroku Sales UK&amp;I, Salesforce</li> </ul> <p>Combine records from multiple systems in real time and use Heroku to create a transactional, golden customer record for activation in Data Cloud.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1719342634333001XKOq" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h2 class="anchored"> <a name="theater-sessions" href="#theater-sessions">Theater Sessions</a> </h2> <p>Explore how Heroku powers the Next-Gen Platform and the C360. 
Theater Sessions presentations are part of a joint Mini Theater experience, offering exclusive content that highlights the integration of Heroku with Salesforce's broader ecosystem.</p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1721140814463001pvgz-secure-apis-on-heroku-with-mulesoft-flex-gateway-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1721140814463001pvgz-secure-apis-on-heroku-with-mulesoft-flex-gateway-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1721140814463001PVgZ">Secure APIs on Heroku with MuleSoft Flex Gateway</a> </h3> <p class="light df-sfplus">Also available on Salesforce+</p> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Jonathan Jenkins</strong>, Senior Success Architect, Salesforce</li> <li> <strong>Parvez Mohamed</strong>, Director of Product Management, Salesforce</li> </ul> <p>Learn to deploy MuleSoft Flex Gateway on Heroku, connect private and secure API apps, and manage access via AnyPoint controls.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1721140814463001PVgZ" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915854315001q9tf-securely-integrating-heroku-apps-with-data-cloud-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915854315001q9tf-securely-integrating-heroku-apps-with-data-cloud-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915854315001Q9tF">Securely Integrating Heroku Apps with Data Cloud</a> </h3> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Vivek Viswanathan</strong>, Director of Product Management, Salesforce</li> <li> <strong>David Baliles</strong>, Distinguished Technical Architect, Salesforce</li> </ul> <p>Learn how to connect Heroku apps with Data Cloud using Flows, Events, and Apex to enhance and extend your data management abilities.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915854315001Q9tF" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915854849001q156-deliver-innovation-with-heroku-and-signature-support-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915854849001q156-deliver-innovation-with-heroku-and-signature-support-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915854849001Q156">Deliver Innovation with Heroku and Signature Support</a> </h3> <p class="light df-sfplus">Also available on Salesforce+</p> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Gabriel Avila</strong>, Senior Customer Solutions Manager, Salesforce</li> <li> 
<strong>Altaf Somani</strong>, Head of Software Development, Goosehead Insurance</li> </ul> <p>Learn how Goosehead Insurance improved customer experience with the Heroku PaaS, improving issue identification and resolution by 75% and boosting response time by 55% with the agent enablement app.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915854849001Q156" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915855386001q1lq-optimize-your-sales-strategy-with-heroku-salesforce-and-ai-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915855386001q1lq-optimize-your-sales-strategy-with-heroku-salesforce-and-ai-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915855386001Q1LQ">Optimize Your Sales Strategy with Heroku, Salesforce, and AI</a> </h3> <p class="light df-sfplus">Also available on Salesforce+</p> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Xiaolin Xu</strong>, Senior Software Engineer, Salesforce</li> </ul> <p>Use the power of vector search to analyze historical sales data and identify trends in customer behavior. Use these insights to make smarter sales forecasts and reduce churn.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915855386001Q1LQ" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h2 class="anchored"> <a name="workshops" href="#workshops">Workshops</a> </h2> <p>For a more interactive learning experience, Heroku's Workshops are the place to be. These hands-on sessions will teach you how to build AI applications and integrate Heroku with Salesforce Data Cloud. It's a unique opportunity to get practical experience with expert guidance.</p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915856455001qkc8-improve-customer-engagement-with-heroku-and-data-cloud-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915856455001qkc8-improve-customer-engagement-with-heroku-and-data-cloud-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915856455001Qkc8">Improve Customer Engagement with Heroku and Data Cloud</a> </h3> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Vivek Viswanathan</strong>, Director of Product Management, Salesforce</li> <li> <strong>David Baliles</strong>, Distinguished Technical Architect, Salesforce</li> </ul> <p>Learn how to ingest Heroku data into Data Cloud, deploy a web app, and get real-time interactions. 
By the end, you'll know how to connect Heroku to Data Cloud to boost your business.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915856455001Qkc8" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915855922001qjqx-build-agentic-ai-applications-with-heroku-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1718915855922001qjqx-build-agentic-ai-applications-with-heroku-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915855922001QjQx">Build Agentic AI Applications with Heroku</a> </h3> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Rand Fitzpatrick</strong>, Senior Director, Product Management, Salesforce</li> <li> <strong>Mauricio Gomes</strong>, Principal Engineer, Salesforce</li> <li> <strong>Marcus Blankenship</strong>, Director of AI/ML Engineering, Salesforce</li> </ul> <p>Discover how to use Heroku to enhance your AI with code execution and function use, seamlessly integrated into your Heroku applications.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1718915855922001QjQx" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h2 class="anchored"> <a name="roundtable" href="#roundtable">Roundtable</a> </h2> <p>Gather with like-minded attendees to discuss a particular topic. Opportunity to network and share best practices and common challenges facing the Salesforce community. Each table is moderated by an expert.</p> <h3 class="anchored"> <a name="a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1720537168662001fite-heroku-for-it-leaders-boost-scalability-and-cost-efficiency-a" href="#a-href-https-reg-salesforce-com-flow-plus-df24-sessioncatalog-page-catalog-session-1720537168662001fite-heroku-for-it-leaders-boost-scalability-and-cost-efficiency-a"></a><a href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1720537168662001fITe">Heroku for IT Leaders: Boost Scalability and Cost Efficiency</a> </h3> <div class="df-presenters-label">Presenters:</div> <ul class="df-presenters"> <li> <strong>Dan Mehlman</strong>, Director, Heroku Technical Architecture, Salesforce</li> <li> <strong>Brandon Schoen</strong>, Director, Heroku Professional Services, Salesforce</li> </ul> <p>Discover how you can achieve limitless scalability by using the right tools for the job with Heroku. 
Save money on DevOps and infrastructure management, allowing you to focus on your product.</p> <p class="add-to-agenda"> <a class="btn btn-xs btn-primary-lightning" href="https://reg.salesforce.com/flow/plus/df24/sessioncatalog/page/catalog/session/1720537168662001fITe" style="text-decoration: none;"> <span class="glyphicon" style="font-family: 'Glyphicons Halflings';position: relative;line-height: 1;top:2px;"></span> Add to Agenda </a> </p> <h2 class="anchored"> <a name="final-thoughts" href="#final-thoughts">Final Thoughts</a> </h2> <p><a href="https://www.salesforce.com/dreamforce/">Dreamforce 2024</a> is shaping up to be an exciting event, especially for <a href="https://www.heroku.com/ctos">IT leaders</a> and <a href="https://www.heroku.com/developers">developers using Heroku</a> for their development needs. Make sure to add these sessions to your schedule and experience the best of what Heroku has to offer!</p> <style> ul.df-presenters { list-style: none; font-size: 0.9em; line-height: 1.4; padding: 0; margin-bottom:18px; } div.df-presenters-label{ font-weight:bold; font-size: 0.7em; line-height: 1.4; text-transform: uppercase; display:block; padding: 0; color: #7C858C; } p.df-sfplus { margin-bottom:12px; margin-top:-6px; color:#215CA0; font-size:0.9em; line-height:1.2; } .add-to-agenda .btn-xs { padding-left:18px; padding-right:18px; } </style> </description> <author>Emily Todd</author> </item> <item> <title>Updating Twelve-Factor: A Call for Participation</title> <link>https://blog.heroku.com/updating-twelve-factor-call-for-participation</link> <pubDate>Wed, 28 Aug 2024 17:39:00 GMT</pubDate> <guid>https://blog.heroku.com/updating-twelve-factor-call-for-participation</guid> <description><p>Over a decade ago, Heroku co-founder Adam Wiggins published the <a href="https://blog.heroku.com/twelve-factor-apps">Twelve-Factor App methodology</a> as a way to codify the <a href="https://en.wikipedia.org/wiki/Twelve-Factor_App_methodology">best practices for writing SaaS applications</a>. In that time, cloud-native has become the default for all new applications, and technologies like Kubernetes are widespread. Best-practices for software have evolved, and we believe that Twelve-Factor also needs to evolve — this time with you, the community.</p> <p>Originally, the Twelve-Factor manifesto focused on building deployable applications without thinking about deployment, and while its core concepts are still <em>remarkably relevant</em>, the examples are another story. Industry practices have evolved considerably and many of the examples reflect outdated practices. Rather than help illustrate the concepts, these outdated examples make the concepts look obsolete.</p> <p>It is time to modernize Twelve-Factor for the next decade of technological advancements. </p> <p>Like art restoration, the majority of the work will first focus on <em>removing accumulated cruft</em> so that the original intent can shine through. For the first step in the restoration, we plan to remove the references to outdated technology and update the examples to reflect modern industry practices. Next, we plan to clearly separate the core concepts from the examples. This will make it easier to evolve the examples in the future without disturbing the timeless philosophy at the core of the manifesto. 
Just like how microservices are a set of separate services that are loosely coupled together so they can be updated independently, we’re applying this same thinking to Twelve-Factor so the specifications can be separate from examples and reference implementations.</p> <p>While we <a href="https://blog.heroku.com/twelve-factor-apps">originally wrote Twelve-Factor</a> on our own, it’s now time that we define and implement these principles with the community — taking lessons that we’ve all learned from building and operating modern apps and systems and sharing them. Let’s do this together, email to join <a href="mailto:twelve-factor@googlegroups.com">twelve-factor@googlegroups.com</a> and tag #12factor (<a href="https://x.com/hashtag/12factor">X</a> / <a href="https://www.linkedin.com/feed/hashtag/?keywords=12factor">LinkedIn</a>) or <a href="https://x.com/heroku">@heroku</a> when you publish blogs with your perspectives and ideas!</p> <p>We look forward to working together to make the new version of the manifesto awesome! </p> </description> <author>Vish Abrams, Chief Architect, Heroku</author> </item> <item> <title>Data Residency Concerns for Global Applications</title> <link>https://blog.heroku.com/data-residency-concerns-global-applications</link> <pubDate>Thu, 22 Aug 2024 16:58:10 GMT</pubDate> <guid>https://blog.heroku.com/data-residency-concerns-global-applications</guid> <description><h2 class="anchored"> <a name="compliance-is-possible-with-the-right-provider" href="#compliance-is-possible-with-the-right-provider">Compliance Is Possible with the Right Provider</a> </h2> <p>Because today’s companies operate in the cloud, they can reach a global audience with ease. At any given moment, you could have customers from Indiana, Indonesia, and Ireland using your services or purchasing your products. With such a widespread customer base, your business data will inevitably cross borders. What does this mean for data privacy, protection, and compliance?</p> <p>If your company deals with customers on a global — or at the very least, multi-national — scale, then understanding the concept of <strong>data residency</strong> is essential. Data residency deals with the laws and regulations that dictate where data must be stored and managed. Compliance with the relevant laws keeps you in good business standing and builds trust with your customers.</p> <!-- more --> <p>In this post, we’ll explore the concept of data residency. We’ll look at the implications of a global customer base on your compliance footprint and efforts. At first glance, achieving compliance with data residency requirements may seem like an insurmountable task. However, leveraging cloud regions from the right cloud provider — such as through Private Dynos from Heroku Enterprise — can help relieve your data residency headaches.</p> <p>Before we begin, and as a reminder, this blog should not be taken as legal advice, and you should always seek your own counsel on matters of legal and regulatory compliance. Let’s start with a brief primer on the core concept for this post.</p> <h2 class="anchored"> <a name="what-is-data-residency" href="#what-is-data-residency">What is data residency?</a> </h2> <p>Data residency refers to the legal requirements that dictate where your data may be stored and processed. When it comes to data management — which is how you handle data throughout its lifecycle — taking into account data residency concerns is essential. 
Ultimately, this comes down to understanding where a user of your application resides, and subsequently where their data must be stored and processed.</p> <p>When people think of data protection laws, many immediately think of the <a href="https://gdpr-info.eu/">General Data Protection Regulation (GDPR)</a> and <a href="https://oag.ca.gov/privacy/ccpa">California Consumer Privacy Act (CCPA)</a>. GDPR has certain requirements about how organizations handle and process the data of <strong>individuals residing within the EU</strong>. The CCPA regulates how businesses handle the <strong>personal data of California residents</strong>.</p> <p>GDPR and CCPA have stringent rules about how data is processed, but they do not necessarily impose strict requirements on where data <em>resides</em>, as long as that data has been processed in a compliant manner. However, many countries have strict data residency laws regarding certain kinds of data. For example, China’s <a href="https://pro.bloomberglaw.com/insights/privacy/china-personal-information-protection-law-pipl-faqs/">Personal Information Protection Law</a> requires that certain types of personally identifiable information (PII) of Chinese citizens be stored within China’s borders. </p> <p>Tangentially related to the concept of data residency are two other concepts worth noting:</p> <ul> <li> <strong>Data sovereignty</strong> deals with a nation’s legal authority and jurisdiction over data, regardless of where it is physically located.</li> <li> <strong>Digital rights</strong> emphasize the individual’s autonomy and authority over their personal data.</li> </ul> <h2 class="anchored"> <a name="why-does-data-residency-matter-for-compliance" href="#why-does-data-residency-matter-for-compliance">Why does data residency matter for compliance?</a> </h2> <p>Your enterprise may be dealing with data from residents or citizens of specific countries, or from specific industries in countries that have strict requirements about where the data must be stored. These are data residency requirements, and businesses that operate internationally must comply with these requirements to avoid running afoul of the law.</p> <p>Compliance ensures that your data handling aligns with local laws and regulations. It helps you avoid legal penalties, and it builds trust among your global customers.</p> <p>What happens if you don’t comply? The risks of non-compliance are significant. Non-compliance can have far-reaching consequences for any business, including:</p> <ul> <li>Hefty fines</li> <li>Legal disputes</li> <li>Possible loss of a license to operate as a business</li> <li>Erosion of customer trust</li> <li>Damaged company reputation</li> </ul> <p>If your business has a global customer base, then data residency matters because compliance is a must. Managing your data in a compliant way is more than just a legal safeguard; it’s foundational to business integrity and customer trust.</p> <h2 class="anchored"> <a name="how-cloud-regions-can-help-you-with-data-residency-compliance" href="#how-cloud-regions-can-help-you-with-data-residency-compliance">How cloud regions can help you with data residency compliance</a> </h2> <p>This brings us to the all-important concept of <strong>cloud regions</strong>. Leveraging cloud regions effectively could be a game-changer for your enterprise’s ability to meet data residency requirements, thereby maintaining compliance.</p> <p>When a cloud provider gives you the option of cloud regions, you can specify where your data is stored. 
This helps you to align your data handling practices with regional compliance laws and regulations.</p> <p>For example, if your customer is an EU resident, you might choose to store their data in an EU-based cloud region. If the sensitive data you process is sourced in India, then it might make sense to store that data in India, to satisfy local jurisdiction and compliance requirements.</p> <p>When you take advantage of cloud regions, you bring better and more granular control over your data. In addition, you likely boost application performance by using geographical proximity to optimize data access.</p> <p>Using cloud regions lets you scale operations internationally while maintaining compliance. You can be sure that each segment of your business adheres to the data protection standards of any given local jurisdiction.</p> <h2 class="anchored"> <a name="heroku-s-private-dynos-for-global-application-data-compliance" href="#heroku-s-private-dynos-for-global-application-data-compliance">Heroku’s Private Dynos for global application data compliance</a> </h2> <p>Heroku Enterprise offers <a href="https://www.heroku.com/dynos/private-spaces">dynos in Private Spaces</a>. These Private Dynos give you enhanced privacy and control, allowing your company to choose from the following <a href="https://devcenter.heroku.com/articles/regions">cloud regions</a>:</p> <ul> <li>Dublin, Ireland</li> <li>Frankfurt, Germany</li> <li>London, United Kingdom</li> <li>Montreal, Canada</li> <li>Mumbai, India</li> <li>Oregon, United States</li> <li>Singapore</li> <li>Sydney, Australia</li> <li>Tokyo, Japan</li> <li>Virginia, United States</li> </ul> <p>These options enable globally operating companies to maintain compliance across different jurisdictions.</p> <p>In addition to cloud regions, Heroku offers <a href="https://www.heroku.com/shield">Heroku Shield</a>, which provides additional security features necessary for high compliance operations. With Heroku Shield Private Spaces, Heroku maintains <a href="https://www.heroku.com/compliance">compliance certifications for PCI, HIPAA, ISO, and SOC</a>.</p> <p>As we’ve discussed, understanding and implementing adequate data residency measures is essential to your ability to operate. However, with cloud regions from a reliable and secure cloud provider platform, compliance is achievable.</p> <p>Taking advantage of Heroku’s various products — whether it’s Private Dynos or Heroku Shield — to address the various laws or regulations that apply to your organization can move you in the direction of maintaining compliance. In addition, by using these features to simplify your data management and data residency concerns, you’ll also level up your operational efficiency.</p> <p>Are you ready to see how Heroku can streamline your compliance efforts with Private Dynos and Heroku Shield? <a href="https://www.heroku.com/private-spaces#contact">Contact Heroku to find out more today</a>!</p> </description> <author>Ethan Limchayseng</author> </item> <item> <title>Building an Event-Driven Architecture with Managed Data Services</title> <link>https://blog.heroku.com/building-event-driven-architecture-managed-data-services</link> <pubDate>Wed, 14 Aug 2024 20:52:00 GMT</pubDate> <guid>https://blog.heroku.com/building-event-driven-architecture-managed-data-services</guid> <description><p>Modern applications have an unceasing buzz of user activity and data flows. Users send a flurry of one-click reactions to social media posts. 
Wearable tech and other IoT sensors work nonstop to transmit event data from their environments. Meanwhile, customers on e-commerce sites perform shopping cart actions or product searches which can bring immediate impact to operations. Today’s software organizations need the ability to process and respond to this rich stream of real-time data.</p> <p>That’s why they adopt an event-driven architecture (EDA) for their applications.</p> <!-- more --> <p>Long gone are the days of monolithic applications with components tightly coupled into a single, bloated piece of software. That approach leads to scalability issues, slower development cycles, and complex maintenance. Instead, today’s applications are built on <strong>decoupled microservices and components</strong> — individual parts of an application that communicate and operate independently, without direct knowledge of each other’s definitions or internal representations. The resulting system is resilient and easier to scale and manage.</p> <p>This is where EDA comes in. EDA enables efficient communication between these independent services, ensuring real-time data processing and seamless integration. With EDA, organizations leverage this decoupling to achieve the scalability and flexibility they need for their dynamic environments. And central to the tech stack for realizing EDA is Apache Kafka.</p> <p>In this post, we’ll explore the advantages of using Kafka for EDA applications. Then, we’ll look at how Apache Kafka on Heroku simplifies your task of getting up and running with the reliability and scalability to support global-scale EDA applications. Finally, we’ll offer a few tips to help pave the road as you move forward with implementation.</p> <h2 class="anchored"> <a name="kafka-s-advantages-for-event-driven-systems" href="#kafka-s-advantages-for-event-driven-systems">Kafka’s Advantages for Event-Driven Systems</a> </h2> <p>An EDA is designed to handle real-time data so that applications can respond instantly to changes and events. Boiled down to the basics, we can break down an EDA application to just a few key concepts:</p> <ul> <li>An <strong>event</strong> is data — often in the form of a simple message or a structured object — that represents something that has happened in the system. For example: a customer has placed an order, or a warehouse has confirmed inventory numbers for a product, or a medical device has raised a critical alert.</li> <li>A <strong>topic</strong> is a channel where an event is published. For example: orders, or confirmations, or vital signs.</li> <li>A <strong>producer</strong> is a component that publishes an event to a topic. For example: a web server, or a POS system, or a wearable fitness monitor.</li> <li>A <strong>consumer</strong> is a component that subscribes to a topic. It listens for a notification of an event, and then it kicks off some other process in response. For example: an email notification system, or a metrics dashboard, or a fulfillment warehouse.</li> </ul> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1723646320-Apache-Kafka-on-Heroku.png" alt="EDA design"></p> <h3 class="anchored"> <a name="decoupling-components" href="#decoupling-components">Decoupling components</a> </h3> <p>An EDA-based application primarily revolves around the main actors in the system: producers and consumers. 
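To make those two roles concrete, here is a minimal sketch using the KafkaJS client (the Node.js library whose documentation this post links to). The broker address, topic name, consumer group, and order fields are illustrative assumptions rather than values from the post:</p> <pre><code class="language-javascript">const { Kafka } = require('kafkajs');

// The producer and consumer share only the broker address and the topic name.
// In a real EDA they would live in separate services; they sit together here only for brevity.
const kafka = new Kafka({ clientId: 'shop', brokers: ['localhost:9092'] });

// Producer: publishes an "order placed" event and knows nothing about consumers.
async function publishOrder(order) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'orders',
    messages: [{ key: order.id, value: JSON.stringify(order) }],
  });
  await producer.disconnect();
}

// Consumer: subscribes to the same topic and reacts to each event independently.
async function startFulfillmentConsumer() {
  const consumer = kafka.consumer({ groupId: 'fulfillment' });
  await consumer.connect();
  await consumer.subscribe({ topics: ['orders'], fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) =&gt; {
      const order = JSON.parse(message.value.toString());
      console.log(`Fulfilling order ${order.id}`);
    },
  });
}

publishOrder({ id: '1001', item: 'widget', qty: 2 }).catch(console.error);
startFulfillmentConsumer().catch(console.error);
</code></pre> <p>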
With decoupling, these components simply focus on their own jobs, knowing nothing about the jobs of others.</p> <p>For example, the order processing API of an e-commerce site receives a new order from a customer. As a producer in an EDA application, the API simply needs to publish an event with the order data. It has no idea about how the order will be fulfilled or how the customer will be notified. On the other side of things, the fulfillment warehouse is a consumer listening for events related to new orders. It doesn’t know or care about who publishes those events. When a new order event arrives, the warehouse fulfills the order.</p> <p>By enabling this loose coupling between components, Kafka makes EDA applications incredibly modular. Kafka acts as a central data store for events, allowing producers to publish events and consumers to read them independently. This reduces the complexity of updates and maintenance. It also allows components to be scaled — vertically or horizontally — without impacting the entire system. New components can be tested with ease. With Kafka at the center, producers and consumers operate outside of it but within the EDA, facilitating efficient, real-time data processing.</p> <h3 class="anchored"> <a name="real-time-data-processing" href="#real-time-data-processing">Real-time data processing</a> </h3> <p>Kafka allows you to process and distribute large streams of data in real time. For applications that depend on up-to-the-second information, this ability is vital. Armed with the most current data, companies can make better decisions faster, improving both their operational efficiency and their customer experiences.</p> <h3 class="anchored"> <a name="fault-tolerance" href="#fault-tolerance">Fault tolerance</a> </h3> <p>For an EDA application to operate properly, the central broker — which handles the receipt of published events by notifying subscribed consumers — must be available and reliable. Kafka is designed for fault tolerance. It replicates data across multiple nodes, running as a cluster of synchronized and coordinated brokers. If one node fails, no data is lost. The system will continue to operate uninterrupted.</p> <p>Kafka’s built-in redundancy is part of what makes it so widely adopted by enterprises that have embraced the event-driven approach.</p> <h2 class="anchored"> <a name="introduction-to-apache-kafka-on-heroku" href="#introduction-to-apache-kafka-on-heroku">Introduction to Apache Kafka on Heroku</a> </h2> <p><a href="https://www.heroku.com/kafka">Apache Kafka on Heroku</a> is a fully managed Kafka service that developers — both in startups and established global enterprises — look to for <strong>ease of management and maintenance</strong>. With a <a href="https://www.heroku.com/managed-data-services">fully managed service</a>, developers can focus their time and efforts on application functionality rather than wrangling infrastructure.</p> <p><a href="https://elements.heroku.com/addons/heroku-kafka">Plans and configurations</a> for Apache Kafka on Heroku include <a href="https://devcenter.heroku.com/articles/multi-tenant-kafka-on-heroku#basic-plans">multi-tenant basic plans</a> as well as single-tenant private plans with higher capacity and network isolation or integration with <a href="https://www.heroku.com/shield">Heroku Shield</a> to meet <a href="https://www.heroku.com/compliance">compliance</a> needs.</p> <p>With Apache Kafka on Heroku, your EDA application will <strong>scale</strong> as demand fluctuates. 
Heroku manages Kafka's scalability by automatically adjusting the number of brokers in the cluster, making certain that sufficient capacity is available as data volume increases. This ensures that your applications can handle both seasonal spikes and sustained growth — without any disruption or need for configuration changes.</p> <p>Then, of course, we have <strong>reliability</strong>. Plans from the Standard-tier and above start with 3 Kafka brokers for redundancy, extending to as many 8 brokers for applications with more intensive fault tolerance needs. With data replicated across nodes, the impact of any node failure will be mitigated, ensuring your data remains intact and your application continues to run.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1723646675-Kafka-Brokers.png" alt="Standard-tier and above start with 3 Kafka brokers for redundancy"></p> <h2 class="anchored"> <a name="integration-best-practices" href="#integration-best-practices">Integration Best Practices</a> </h2> <p>When you design your EDA application to be powered by Kafka, a successful integration will ensure its smooth and efficient operation. When setting up Kafka for your event-driven system, keep in mind the following key practices:</p> <ul> <li> <strong>Define your data flow</strong>. As you begin your designs, map out clearly how data ought to move between producers and consumers. Remember that a consumer of one event can also act as a producer of another event. Producers can publish to multiple topics, and consumers can subscribe to multiple topics. When you’ve designed your data flows clearly, integrating Kafka will be seamless and bottleneck-free.</li> <li> <strong>Ensure data consistency and integrity</strong>. Take advantage of Kafka’s built-in features, such as <a href="https://kafka.js.org/docs/transactions">transactions</a>, <a href="https://docs.confluent.io/platform/7.6/schema-registry/index.html">topic and data schema management</a>, and <a href="https://docs.confluent.io/kafka/design/delivery-semantics.html#:%7E:text=If%20there%20is%20a%20system%20failure%2C%20messages%20are%20never%20lost,part%20of%20the%20system%20fails.">message delivery guarantees</a>. Using all that Kafka has to offer will help you reduce the risk of errors, ensuring that messages remain consistent and reliably delivered across your system.</li> <li> <strong>Monitor performance and log activity</strong>: Use monitoring tools to track key performance metrics, and leverage <a href="https://devcenter.heroku.com/articles/kafka-on-heroku#monitoring-via-logs">logging for Kafka’s operations</a>. Robust logging practices and continuous monitoring of your application will provide crucial performance insights and alert you of any system health issues.</li> </ul> <h2 class="anchored"> <a name="conclusion-bringing-it-all-together-with-heroku" href="#conclusion-bringing-it-all-together-with-heroku">Conclusion: Bringing It All Together with Heroku</a> </h2> <p>In this post, we've explored how pivotal Apache Kafka is as a foundation for event-driven architectures. By decoupling components and ensuring fault tolerance, Kafka ensures EDA-based applications are reliable and easily scalable. 
By looking to Heroku for its managed Apache Kafka service, enterprises can offload the infrastructure concerns to a trusted provider, freeing their developers up to focus on innovation and implementation.</p> <p>For more information about Apache Kafka on Heroku, <a href="https://heroku.github.io/kafka-demo/">view the demo</a> or <a href="https://www.heroku.com/contact-sales">contact our team</a> of implementation experts today. When you’re ready to get started, <a href="https://signup.heroku.com/">sign up for a new account</a>.</p> </description> <author>Jonathan Brown</author> </item> <item> <title>Mastering API Integration: Salesforce, Heroku, and MuleSoft Anypoint Flex Gateway</title> <link>https://blog.heroku.com/mastering-api-integration-salesforce-heroku-mulesoft-anypoint-flex-gateway</link> <pubDate>Mon, 29 Jul 2024 13:00:00 GMT</pubDate> <guid>https://blog.heroku.com/mastering-api-integration-salesforce-heroku-mulesoft-anypoint-flex-gateway</guid> <description><p>In today’s fast-paced digital world, companies are looking for ways to securely expose their APIs and microservices to the internet. MuleSoft Anypoint Flex Gateway is a powerful solution that solves this problem.</p> <p>Let's walk through deploying the Anypoint Flex Gateway on Heroku in a few straightforward steps. You'll learn how to connect your private APIs and microservices on the Heroku platform through the <a href="https://www.mulesoft.com/platform/api/flex-api-gateway">Anypoint Flex Gateway</a> and the <a href="https://www.mulesoft.com/platform/api/manager">Anypoint API Manager</a>, without the hassle of managing infrastructure. Get ready to unlock the potential of this potent pairing and, in the future, integrate it with Salesforce.</p> <!-- more --> <h2 class="anchored"> <a name="introduction" href="#introduction">Introduction</a> </h2> <p>Salesforce's ecosystem provides a seamless, integrated platform for our customers. The most recent MuleSoft Anypoint Flex Gateway release is now compatible with Heroku, offering an improved security profile and reduced latency for APIs hosted on Heroku.</p> <p>By deploying the Anypoint Flex Gateway inside the same <a href="https://www.heroku.com/private-spaces">Private Space</a> as your Heroku apps, you create an environment where your Heroku apps with <a href="https://blog.heroku.com/private-spaces-internal-routing">internal routing</a> can be exposed to the public through the Flex Gateway. This adds an extra layer of security and control, only allowing traffic to flow through the Flex Gateway, which can be configured easily from the MuleSoft control plane and scaled with the simplicity of Heroku. The joint integration simplifies operations and scalability and accelerates your time to value for your Salesforce solutions.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721934074-heroku-flex-gateway.png" alt="Architecture exposing multiple private APIs on Heroku through Flex Gateway" title="Architecture exposing multiple private APIs on Heroku through Flex Gateway"></p> <h2 class="anchored"> <a name="what-is-anypoint-flex-gateway" href="#what-is-anypoint-flex-gateway">What is Anypoint Flex Gateway?</a> </h2> <p>MuleSoft Anypoint Flex Gateway is a lightweight, ultrafast API Gateway that simplifies the process of building, securing, and managing APIs in the cloud. It removes the burden of API protection, enabling organizations to focus on delivering exceptional digital experiences. 
Built on the <a href="https://www.mulesoft.com/platform/enterprise-integration">Anypoint Platform</a>, Flex Gateway provides comprehensive API management and governance capabilities for APIs exposed in the cloud.</p> <p>Anypoint Flex Gateway offers robust security features, including authentication, authorization, and encryption, to safeguard sensitive data. It empowers you with granular traffic management, enabling control over API traffic flow and the enforcement of rate limiting policies to maintain service availability. Moreover, Flex Gateway works with API Manager, MuleSoft’s centralized cloud-based API control plane, to deliver valuable analytics and insights into API usage, facilitating data-driven decisions and the optimization of API strategies. Flex Gateway and API Manager are key parts of MuleSoft’s <a href="https://www.mulesoft.com/platform/api-management">universal API Management</a> capabilities to discover, build, govern, protect, manage and engage with any API.</p> <p>In conclusion, MuleSoft Anypoint Flex Gateway is an essential resource for organizations seeking to seamlessly integrate and secure their APIs with Heroku and manage them effectively in a Heroku Private Space. Heroku’s fully managed service, combined with robust security, traffic management, and analytics capabilities, empowers businesses to confidently embrace the cloud and deliver exceptional API experiences to their users.</p> <h2 class="anchored"> <a name="setting-up-flex-gateway-on-heroku" href="#setting-up-flex-gateway-on-heroku">Setting up Flex Gateway on Heroku</a> </h2> <p>To get started with MuleSoft Anypoint Flex Gateway on Heroku, you will need to:</p> <ol> <li>Create a <a href="https://signup.heroku.com/">Heroku account</a> </li> <li>Create an <a href="https://anypoint.mulesoft.com/login/signup">Anypoint Platform account</a> </li> <li>Install the <a href="https://devcenter.heroku.com/articles/heroku-cli">Heroku CLI</a> </li> <li>Install <a href="https://docs.docker.com/get-docker/">Docker</a> to register the Flex Gateway</li> </ol> <p>Upon completing these steps, you are now ready to begin the setup process.</p> <p>The process is described as follows:</p> <ol> <li>Deploy an API in a Heroku <a href="https://www.heroku.com/private-spaces">Private Space</a> </li> <li>Create an API specification in Anypoint Design Center</li> <li>Register the Flex Gateway in Runtime Manager</li> <li>Deploy the Flex Gateway to Heroku</li> <li>Connect the Private API to the Flex Gateway</li> </ol> <p>Now let’s detail each step so you can learn how to implement this pattern for your enterprise applications.</p> <h2 class="anchored"> <a name="deploy-an-api-in-a-heroku-private-space" href="#deploy-an-api-in-a-heroku-private-space">Deploy an API in a Heroku Private Space</a> </h2> <p><em>Note: To learn how to create a Heroku Private Space please refer to the <a href="https://devcenter.heroku.com/articles/private-spaces">documentation</a>, for our example we already have a private space called <code>flex-gateway-west</code>.</em></p> <p>Let's take one of our <a href="https://github.com/heroku-reference-apps/openapi-fastify-jwt">reference applications</a> as our example, which exposes a REST API with OpenAPI support. 
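For a rough sense of what such an internal service looks like, here is a hypothetical, highly simplified sketch of a Fastify API that requires a JWT; this is not the reference app’s actual code, and the route, field names, and secret handling are illustrative assumptions:</p> <pre><code class="language-javascript">// Hypothetical, simplified stand-in for an internal employee-directory API.
const fastify = require('fastify')({ logger: true });

// The reference app verifies tokens against an RSA public key;
// a shared secret is used here only to keep the sketch short.
fastify.register(require('@fastify/jwt'), { secret: process.env.JWT_SECRET });

// Reject any request that does not carry a valid JWT.
fastify.addHook('onRequest', async (request, reply) =&gt; {
  try {
    await request.jwtVerify();
  } catch (err) {
    reply.send(err);
  }
});

fastify.get('/api/employees', async () =&gt; {
  return [{ id: 1, name: 'Ada Lovelace', role: 'Engineer' }];
});

// Heroku provides the port to bind to via the PORT environment variable.
fastify.listen({ port: process.env.PORT || 3000, host: '0.0.0.0' })
  .catch((err) =&gt; {
    fastify.log.error(err);
    process.exit(1);
  });
</code></pre> <p>The real reference application layers an OpenAPI specification and RSA public key verification for JWTs on top of this basic shape; see its README for the actual setup.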
</p> <p>Before we deploy the app, we must ensure that it is created as an internal application within the private space.</p> <p>You can deploy this internal application using the <strong>Deploy to Heroku</strong> button or the Heroku CLI.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721934591-Cursor_and_Create_New_App___Heroku.png" alt="Deploying an internal application to a Private Space using the UI" title="Deploying an internal application to a Private Space using the UI"></p> <p>When using the Heroku CLI make sure you set the <code>--internal-routing</code> flag:</p> <pre><code class="language-bash">heroku create employee-directory-api --space flex-gateway-west --internal-routing </code></pre> <p>Next, you will proceed to configure the application and any add-ons required. In our example, we need to provision a private database (<code>heroku-postgresql:private-0</code>) and set up an RSA public key for JWT authentication support, but these steps might differ for your application. Consult the reference application’s <a href="https://github.com/heroku-reference-apps/openapi-fastify-jwt/blob/main/README.md">README</a> for a more detailed guide.</p> <p>Once you've deployed the app, grab the application URL from the settings page in your Heroku Dashboard. You'll need this for a later step.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721934690-employee-directory-api_%C2%B7_Settings___Heroku.png" alt="Internal application URL from Heroku Dashboard" title="Internal application URL from Heroku Dashboard"></p> <h2 class="anchored"> <a name="create-an-api-specification-in-anypoint-design-center" href="#create-an-api-specification-in-anypoint-design-center">Create an API specification in Anypoint Design Center</a> </h2> <p>To link the API with the Flex Gateway, you'll need to create an API specification in Anypoint Platform using the Design Center and then publish it to Anypoint Exchange.</p> <p>If your API running in Heroku Private Space has an API specification that uses the OpenAPI 3.0 standard, which is supported by Anypoint Platform, you can use it here. If you don’t, you can use Design Center to create one from scratch. To learn more, see the <a href="https://docs.mulesoft.com/design-center/design-create-publish-api-specs">API Designer documentation</a>.</p> <p>The User Directory reference application offers both JSON and Yaml API specifications for your convenience. Access them in the <a href="https://github.com/heroku-reference-apps/openapi-fastify-jwt/tree/main/openapi">openapi</a> folder on GitHub.</p> <p>In Design Center, let’s click on Create &gt; Import from file, and select either the Yaml or JSON file, and then click on <strong>Import</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721934816-image%20%281%29.png" alt="Import API specification from file"></p> <p>Once you've imported your file, check Design Center to see that your spec file is error-free. You can even use the mocking service to test the API and make sure everything looks good. 
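If you cloned the reference app earlier, you can also sanity-check the spec files on your own machine. This optional sketch assumes the specs live in the repository's <code>openapi</code> folder and uses the third-party Redocly CLI, which isn't part of Anypoint Platform:</p> <pre><code class="language-bash"># List the spec files shipped with the reference app
ls openapi/
# Optionally lint the spec you plan to import (substitute the filename you see above)
npx @redocly/cli lint openapi/&lt;spec-file&gt;
</code></pre> <p>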
If there are no problems and it's the right file, go ahead and click on <strong>Publish</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721934866-image%20%282%29.png" alt="API specification in Design Center"></p> <p>Add the finishing touches to your metadata, like API version and Lifecycle State, then click on <strong>Publish to Exchange</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721934908-image%20%283%29.png" alt="Publishing to Exchange confirmation window"></p> <p>Now, with your API specification in hand, let's move on to registering and deploying the Anypoint Flex Gateway to Heroku.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721934939-image%20%284%29.png" alt="API Published to Exchange"></p> <h2 class="anchored"> <a name="register-the-flex-gateway-in-runtime-manager" href="#register-the-flex-gateway-in-runtime-manager">Register the Flex Gateway in Runtime Manager</a> </h2> <p>Before you deploy to Heroku, you need to get the <code>registration.yaml</code> configuration file. To do that, go to Runtime Manager &gt; Flex Gateways and click <strong>Add Gateway</strong>. Then select Container &gt; Docker and follow the instructions to set up your gateway locally using Docker. Just follow steps 1 and 2, and that will create the <code>registration.yaml</code> file you need.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935099-Anypoint_Management_Center.png" alt="Flex Gateway registration instructions"></p> <p>Once the command has executed, you'll see the <code>registration.yaml</code> file. You'll need this file in the next step, and you should also see the gateway listed in your Runtime Manager.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935133-image%20%285%29.png" alt="Flex Gateway registered in Runtime Manager"></p> <h2 class="anchored"> <a name="deploy-the-flex-gateway-to-heroku" href="#deploy-the-flex-gateway-to-heroku">Deploy the Flex Gateway to Heroku</a> </h2> <p>Now, let's get the Flex Gateway deployed to Heroku. You can find a reference application for the <a href="https://github.com/heroku-reference-apps/heroku-docker-flex-gateway">Heroku Docker Flex Gateway</a> on GitHub. There, you have two options: use the <strong>Deploy to Heroku</strong> button for a quick and easy deployment, or follow the detailed <strong>Manual Deployment</strong> instructions in the <a href="https://github.com/heroku-reference-apps/heroku-docker-flex-gateway/blob/main/README.md">README</a> using the Heroku CLI. 
Just ensure you're setting up the Flex Gateway in the same Private Space as the internal API you deployed in earlier steps.</p> <p>For our example, we will use the Heroku CLI, naming our Flex Gateway <code>api-ingress-west</code> and deploying to the <code>flex-gateway-west</code> private space.</p> <pre><code class="language-bash">git clone https://github.com/heroku-reference-apps/heroku-docker-flex-gateway/
cd heroku-docker-flex-gateway
heroku create api-ingress-west --space flex-gateway-west
heroku config:set FLEX_CONFIG="$(cat registration.yaml)" -a api-ingress-west
heroku config:set FLEX_DYNAMIC_PORT_ENABLE=true -a api-ingress-west
heroku config:set FLEX_DYNAMIC_PORT_ENVAR=PORT -a api-ingress-west
heroku config:set FLEX_DYNAMIC_PORT_VALUE=8081 -a api-ingress-west
heroku config:set FLEX_CONNECTION_IDLE_TIMEOUT_SECONDS=60 -a api-ingress-west
heroku config:set FLEX_STREAM_IDLE_TIMEOUT_SECONDS=300 -a api-ingress-west
heroku config:set FLEX_METRIC_ADDR=tcp://127.0.0.1:2000 -a api-ingress-west
heroku config:set FLEX_SERVICE_ENVOY_DRAIN_TIME=30 -a api-ingress-west
heroku config:set FLEX_SERVICE_ENVOY_CONCURRENCY=1 -a api-ingress-west
heroku stack:set container
git push heroku main
</code></pre> <p>You’ll see your Heroku apps deployed to the Private Space. After a minute or so, you should also see the Flex Gateway as connected in Runtime Manager.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935247-image%20%286%29.png" alt="Flex Gateway and API deployed on Heroku Private Spaces"></p> <p>Make sure to grab the <code>api-ingress-west</code> URL under settings like we did with the API; we will need this URL to test things out.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935287-image%20%287%29.png" alt="Flex Gateway registered on Runtime Manager"></p> <p>And that’s how you deploy the Flex Gateway to Heroku. Now let’s connect our internal API and test it.</p> <h2 class="anchored"> <a name="connect-the-private-api-to-the-flex-gateway" href="#connect-the-private-api-to-the-flex-gateway">Connect the Private API to the Flex Gateway</a> </h2> <p>Now, the final step is connecting the Private API with Flex Gateway. To do this, go to Anypoint API Manager and click on <strong>Add API</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935343-image%20%288%29.png" alt="Add an API to the Flex Gateway"></p> <p>Then, select the API from Exchange and click on <strong>Next</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935374-image%20%289%29.png" alt="Select API from Exchange"></p> <p>Let's leave the API Downstream default options as they are and move on to setting up the Upstream. Remember the application URL from our initial step? That URL will serve as our <strong>Upstream URL</strong> (using http and no trailing <code>/</code>).</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935414-image%20%2810%29.png" alt="API Upstream configuration"></p> <p>If everything looks good, go ahead and click on <strong>Save &amp; Deploy</strong>.</p> <p>As the API is not directly accessible due to internal routing, calling it directly will result in a timeout. 
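You can see this for yourself with a quick check; the hostname below is a placeholder, so substitute the internal application URL you copied from the settings page earlier:</p> <pre><code class="language-bash"># Calling the internal app directly times out, because internal routing only
# accepts traffic from inside the Private Space
curl --max-time 10 &lt;internal-app-url&gt;/directory
</code></pre> <p>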
However, by calling it through the Flex Gateway, you should be able to retrieve the expected response.</p> <p>Let's proceed with a GET request to <code>/directory</code> through the Flex Gateway URL.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935467-zsh.png" alt="HTTP GET request to the user directory API route using curl"></p> <p>Or you can view the User Directory OpenAPI documentation from our reference app directly in a web browser by using the same URL.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935494-Swagger_UI.png" alt="User Directory OpenAPI documentation from a web browser"></p> <p>Congratulations, you've successfully exposed an internal API deployed in Heroku Private Spaces to the outside world through the Anypoint Flex Gateway running on Heroku. Now you can take full advantage of Anypoint API Manager's capabilities, including API-level policies.</p> <h2 class="anchored"> <a name="securing-your-api-with-anypoint-flex-gateway" href="#securing-your-api-with-anypoint-flex-gateway">Securing your API with Anypoint Flex Gateway</a> </h2> <p>A common pattern for API authentication is using Client ID Enforcement. You can avoid coding your own solution by utilizing the API Manager to apply policies to your API. In this example, we'll implement Client ID Enforcement to secure the API.</p> <p>To begin, let's establish an application within Anypoint Platform that will enable us to access the API. Navigate to Exchange, select your API, and in the top right corner, click on <strong>Request access</strong>.</p> <p>Then, pick the API instance where your API is deployed, and select an application to grant access to. If you don’t have one, you can create a new application here and click on <strong>Request access</strong> to obtain the Client ID and Client Secret credentials.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935573-image%20%2811%29.png" alt="Request access form"></p> <p>Upon your application's approval, you'll receive the Client ID and Client Secret. These credentials will be needed for accessing our newly secured API, so be sure to keep them at hand.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935601-User_Directory_API.png" alt="Request access approval window"></p> <p>Next, navigate to API Manager, choose the API, and click on <strong>Policies</strong> in the left menu. Click on <strong>Add policy</strong>, then select <strong>Client ID Enforcement</strong> and proceed to <strong>Next</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935641-image%20%2812%29.png" alt="Add policy form"></p> <p>Leave the default configuration for the Client ID Enforcement policy and then click on <strong>Apply</strong>.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935672-image%20%2813%29.png" alt="Client ID Enforcement policy configuration"></p> <p>Now that the policy is active, let’s try a new GET request to the <code>/directory</code> API through the Flex Gateway URL.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935703-Cursor_and_DRAFT__Heroku___MuleSoft_Flex_Gateway_Blog_-_Quip.png" alt="HTTP GET request failing with no client_id"></p> <p>Because we're enforcing the Client ID, we must include it in the request. 
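As a sketch, the policy's default configuration looks for <code>client_id</code> and <code>client_secret</code> values sent with the request, for example as headers; the URL and credentials below are placeholders, and you should double-check the policy configuration in API Manager if you changed any defaults:</p> <pre><code class="language-bash"># Substitute your Flex Gateway URL and the credentials obtained from Exchange
curl &lt;flex-gateway-url&gt;/directory \
  -H "client_id: &lt;your-client-id&gt;" \
  -H "client_secret: &lt;your-client-secret&gt;"
</code></pre> <p>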
Let's purposely use an incorrect one to witness the authentication attempt failure.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935768-zsh%20%281%29.png" alt="HTTP GET request authentication failing with incorrect client_id and secret"></p> <p>And finally, let's get the right Client ID and Client Secret in place to test the authentication.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1721935792-zsh%20%282%29.png" alt="HTTP GET request returning the expected values"></p> <p>This is just one simple but powerful example of the many policies that you can apply in API Manager.</p> <h2 class="anchored"> <a name="what-s-next" href="#what-s-next">What’s next?</a> </h2> <p>In our next blog post, we'll delve into the various policies you can employ to improve your API with additional authentication, rate limiting, IP allowlist/blocklist measures, and more. We'll also show you how to register your API as an External MuleSoft service in Salesforce, ready to be called from Flow and Apex.</p> <h2 class="anchored"> <a name="strategic-collaboration-for-our-customers" href="#strategic-collaboration-for-our-customers">Strategic Collaboration for our Customers</a> </h2> <p>The Heroku Customer Solutions Architecture (CSA) team, in collaboration with MuleSoft Engineers, played a pivotal role in this Salesforce multi-cloud integration scenario. They listened to customers, worked to understand requirements and technical constraints, and proposed a preliminary proof-of-concept and a series of incremental changes to achieve a perfect match between Heroku and MuleSoft Flex Gateway.</p> <p><a href="https://www.heroku.com/enterprise">Heroku Enterprise</a> customers with Premier or Signature Success Plans can <a href="https://help.heroku.com/enterprise" rel="nofollow">request in-depth guidance</a> on this topic from the CSA team. Learn more about <a href="https://www.heroku.com/support">Expert Coaching Sessions here</a> or contact your Salesforce account executive.</p> <h2 class="anchored"> <a name="learning-resources" href="#learning-resources">Learning Resources</a> </h2> <ul> <li><a href="https://docs.mulesoft.com/gateway/latest/">Anypoint Flex Gateway Overview</a></li> <li><a href="https://docs.mulesoft.com/api-manager/latest/latest-overview-concept">Anypoint API Manager documentation</a></li> <li><a href="https://www.mulesoft.com/lp/demo/api/api-management-series-mulesoft">API Management with MuleSoft Demo series</a></li> <li><a href="https://devcenter.heroku.com/articles/private-spaces">Heroku Private Spaces documentation</a></li> </ul> <h2 class="anchored"> <a name="authors" href="#authors">Authors</a> </h2> <h4 class="anchored"> <a name="julian-duque" href="#julian-duque">Julián Duque</a> </h4> <p>Julián is a Principal Developer Advocate at Heroku, with a strong focus on community, education, Node.js, and JavaScript. He loves sharing knowledge and empowering others to become better developers.</p> <h4 class="anchored"> <a name="parvez-mohamed" href="#parvez-mohamed">Parvez Mohamed</a> </h4> <p>Parvez Syed Mohamed is a seasoned product management leader with over 15 years of experience in Cloud Technologies. Currently, as Director of Product Management at MuleSoft/Salesforce, he drives innovation and growth in API protection.</p> <h4 class="anchored"> <a name="andrea-bernicchia" href="#andrea-bernicchia">Andrea Bernicchia</a> </h4> <p>Andrea Bernicchia is a Senior Customer Solutions Architect at Heroku. 
He enjoys engaging with Heroku customers to provide solutions for software integrations, architecture patterns, best practices, and performance tuning to optimize applications running on Heroku.</p> </description> <author>Julián Duque</author> </item> <item> <title>Heroku CLI v9: Infrastructure Upgrades and oclif Transition</title> <link>https://blog.heroku.com/heroku-cli-v9-infrastructure-upgrades-oclif-transition</link> <pubDate>Wed, 24 Jul 2024 20:55:03 GMT</pubDate> <guid>https://blog.heroku.com/heroku-cli-v9-infrastructure-upgrades-oclif-transition</guid> <description><h2 class="anchored"> <a name="introduction" href="#introduction">Introduction</a> </h2> <p>The <a href="https://devcenter.heroku.com/articles/heroku-cli">Heroku CLI</a> is an incredible tool. It’s simple, extendable, and allows you to interact with all the Heroku functionality you depend on day to day. For this reason, it’s incredibly important for us to keep it up to date. Today, we're excited to highlight a major upgrade with the release of Heroku CLI v9.0.0, designed to streamline contributions, building, and iteration processes through the powerful <a href="https://oclif.io/">oclif platform</a>.</p> <h2 class="anchored"> <a name="what-39-s-new-in-version-9-0-0" href="#what-39-s-new-in-version-9-0-0">What's New in Version 9.0.0?</a> </h2> <p>Version 9.0.0 focuses on architectural improvements. Here's what you need to know:</p> <ul> <li> <strong>oclif Platform</strong>: All core CLI commands are built on the oclif platform. Previously, many commands were built using a pre-oclif legacy architecture.</li> <li> <strong>Unified Package</strong>: All core CLI commands are consolidated into a single package, rather than spread across multiple packages. This consolidation makes tasks like dependency management much easier.</li> <li> <strong>Increased Testing</strong>: We greatly improved the code coverage of our unit and integration tests.</li> <li> <strong>Improved Release Process</strong>: Our release process is much simpler and more automated. We can now easily release pre-release versions of the CLI for testing.</li> <li> <a href="https://devcenter.heroku.com/changelog-items/2925"><strong>Breaking Changes</strong></a>: With the switch to oclif/core, expect changes in output formatting, including additional new lines, whitespace, table formatting, and output colors. Additional flags now require a <code>--</code> separator, and several commands have updated argument orders or removed flags. We also removed deprecated commands like <code>outbound-rules</code>, <code>pg:repoint</code>, <code>orgs:default</code>, <code>certs:chain</code>, and <code>certs:key</code>. </li> </ul> <p>These changes apply only to the core Heroku CLI commands and don’t affect commands installed separately via plugins. </p> <h2 class="anchored"> <a name="why-we-moved-to-oclif" href="#why-we-moved-to-oclif">Why We Moved to oclif</a> </h2> <p>For the first time, all core CLI commands are built on the oclif platform. By restructuring the core CLI repository, improving our testing and release processes, and adding telemetry, we laid a solid foundation that allows us to innovate and ship features more quickly and confidently than ever before.</p> <p>Heroku pioneered oclif (Open CLI Framework), and it’s now the standard CLI technology used at companies like Salesforce, Twilio, and Shopify. It’s a popular framework for building command-line interfaces, offering a modular structure and robust plugin support. 
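If you've only used oclif indirectly through the Heroku CLI, you can get a feel for the framework itself by scaffolding a toy CLI of your own; this quick sketch uses oclif's generator and is unrelated to the Heroku CLI codebase:</p> <pre><code class="language-bash"># Scaffold a new oclif-based CLI project and follow the generator prompts
npx oclif generate my-cli
cd my-cli
# The generated template includes example commands you can run and modify
</code></pre> <p>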
By migrating all core CLI commands to oclif, we unified our command architecture, moving away from the legacy systems that previously fragmented our development process. This transition allows for more consistent command behavior, easier maintenance, and better scalability. oclif’s flexibility and widespread adoption underscore its importance in delivering a more reliable and efficient CLI for our users. </p> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>The significant architectural enhancements in CLI version 9.0.0 are a testament to Heroku's commitment to our long-term vision and the exciting developments ahead for our customers. The integration of the oclif platform allows us to deliver a more reliable and efficient CLI, paving the way for future innovations.</p> <p>Ready to experience the upgrade? Update to CLI version 9.0.0 by running <code>heroku update</code>. For more installation options, visit our <a href="https://devcenter.heroku.com/articles/heroku-cli#install-the-heroku-cli">Dev Center</a>. We encourage you to try it and share your <a href="https://github.com/heroku/cli/issues">feedback for enhancing the Heroku CLI</a> and for our full Heroku product via the <a href="https://github.com/heroku/roadmap">Heroku GitHub roadmap</a>.</p> </description> <author>Anush DSouza</author> </item> <item> <title>Using pnpm on Heroku</title> <link>https://blog.heroku.com/using-pnpm-on-heroku</link> <pubDate>Thu, 18 Jul 2024 21:12:00 GMT</pubDate> <guid>https://blog.heroku.com/using-pnpm-on-heroku</guid> <description><h2 class="anchored"> <a name="intro" href="#intro">Intro</a> </h2> <p>The Heroku Node.js buildpack now supports <a href="https://pnpm.io/">pnpm</a>, an alternative dependency manager. Early Node.js application owners who've taken advantage of <a href="https://devcenter.heroku.com/articles/nodejs-support#using-pnpm">pnpm support</a> have seen 10-40% faster install times compared to NPM on Heroku deployments. 
It’s an excellent choice for managing packages in the Node.js ecosystem because it:</p> <ul> <li> <a href="https://pnpm.io/motivation#saving-disk-space">Minimizes disk space</a> with its content-addressable package store.</li> <li> <a href="https://pnpm.io/motivation#boosting-installation-speed">Speeds up installation</a> by weaving together the resolve, fetch, and linking stages of dependency installation.</li> </ul> <p>This post will introduce you to some of the benefits of the <a href="https://pnpm.io/">pnpm</a> package manager and walk you through creating and deploying a sample application.</p> <!-- more --> <h2 class="anchored"> <a name="prerequisites" href="#prerequisites">Prerequisites</a> </h2> <p>Prerequisites for this include:</p> <ul> <li>A <a href="https://www.heroku.com/platform">Heroku account</a> (<a href="https://signup.heroku.com">signup</a>).</li> <li>A development environment with the following installed: <ul> <li>Git</li> <li>Node.js (v18 or higher)</li> <li>Heroku CLI</li> </ul> </li> </ul> <p>If you don’t have these already, you can follow the <a href="https://devcenter.heroku.com/articles/getting-started-with-nodejs#set-up">Getting Started with Node.js - Setup</a> for installation steps.</p> <h2 class="anchored"> <a name="initialize-a-new-pnpm-project" href="#initialize-a-new-pnpm-project">Initialize a new pnpm project</a> </h2> <p>Let’s start by creating the project folder:</p> <pre><code class="bash">mkdir pnpm-demo
cd pnpm-demo
</code></pre> <p>Since v16.13, Node.js has been shipping <a href="https://nodejs.org/api/corepack.html">Corepack</a> for managing package managers, and it’s a preferred method for installing either pnpm or Yarn. This is an experimental Node.js feature, so you need to enable it by running:</p> <pre><code class="bash">corepack enable
</code></pre> <div class="alert" style="background:rgba(0,107,212,0.1); border:1px solid rgba(0,107,212,0.2); color:#006BD4; padding:1em 2em; margin:30px 0;"> Note: If the corepack command was not found, you may need to <a href="https://github.com/nodejs/corepack?tab=readme-ov-file#manual-installs" style="color:#006BD4; font-weight:700; text-decoration:underline;">install it manually</a>. </div> <p>Now that <a href="https://nodejs.org/api/corepack.html">Corepack</a> is enabled, we can use it to download pnpm and initialize a basic <code>package.json</code> file by running:</p> <pre><code class="bash">corepack pnpm@9 init
</code></pre> <p>This will cause Corepack to download the latest <code>9.x</code> version of <a href="https://pnpm.io/">pnpm</a> and execute <code>pnpm init</code>. 
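If you're curious which exact version Corepack resolved, you can check at any point; the version shown in the comment is just an example and will likely differ for you:</p> <pre><code class="bash">pnpm --version
# e.g. 9.0.5
</code></pre> <p>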
Next, we should pin the version of <a href="https://pnpm.io/">pnpm</a> in <code>package.json</code> with:</p> <pre><code class="bash">corepack use pnpm@9 </code></pre> <p>This will add a field in <code>package.json</code> that looks similar to the following:</p> <pre><code class="bash">"packageManager": "pnpm@9.0.5+sha256.61bd66913b52012107ec25a6ee4d6a161021ab99e04f6acee3aa50d0e34b4af9" </code></pre> <p>We can see the <code>packageManager</code> field contains:</p> <ul> <li>The package manager to use (<code>pnpm</code>).</li> <li>The version of the package manager (<code>9.0.5</code>).</li> <li>An integrity signature that indicates an algorithm (<code>sha256</code>) and digest (<code>61bd66913b52012107ec25a6ee4d6a161021ab99e04f6acee3aa50d0e34b4af9</code>) that will be used to verify the downloaded package manager.</li> </ul> <p>Pinning the package manager to an exact version is always recommended for deterministic builds.</p> <div class="alert" style="background:rgba(0,107,212,0.1); border:1px solid rgba(0,107,212,0.2); color:#006BD4; padding:1em 2em; margin:30px 0;"> Note: If you don’t want to use <a href="https://nodejs.org/api/corepack.html" style="color:#006BD4; font-weight:700; text-decoration:underline;">Corepack</a>, we also support declaring a pnpm version in the <code style="background:#f5f5f7; color:#3F3F44;">engines</code> field of <code style="background:#f5f5f7; color:#3F3F44;">package.json</code> in the same way we already do with npm and Yarn. See <a href="https://devcenter.heroku.com/articles/nodejs-support#specifying-a-package-manager" style="color:#006BD4; font-weight:700; text-decoration:underline;">Node.js Support - Specifying a Package Manager</a> for more details. </div> <h2 class="anchored"> <a name="create-the-demo-application" href="#create-the-demo-application">Create the demo application</a> </h2> <p>We’ll create a simple <a href="https://expressjs.com/">Express</a> application using the <code>express</code> package. 
We can use the <a href="https://pnpm.io/cli/add">pnpm add command</a> to do this:</p> <pre><code class="bash">pnpm add express
</code></pre> <p>Running the above command will add the following to your <code>package.json</code> file:</p> <pre><code class="json">"dependencies": {
  "express": "^4.19.2"
}
</code></pre> <p>It will also install the dependency into the <code>node_modules</code> folder in your project directory and create a lockfile (<code>pnpm-lock.yaml</code>).</p> <p>The <code>pnpm-lock.yaml</code> file is important for several reasons:</p> <ul> <li>Our Node.js Buildpack requires <code>pnpm-lock.yaml</code> to enable <a href="https://pnpm.io/">pnpm</a> support.</li> <li>It enforces consistent installations and package resolution between different environments.</li> <li>Package resolution can be skipped, which enables faster builds.</li> </ul> <p>Now, create an <code>app.js</code> file in your project directory with the following code:</p> <pre><code class="js">const express = require('express')
const app = express()
const port = process.env.PORT || 3000

app.get('/', (req, res) =&gt; {
  res.send('Hello pnpm!')
})

app.listen(port, () =&gt; {
  console.log(`pnpm demo app listening on port ${port}`)
})
</code></pre> <p>When this file executes, it starts a web server that responds to HTTP GET requests on <code>/</code> with the message <code>Hello pnpm!</code>.</p> <p>You can verify this works by running <code>node app.js</code> and then opening <a href="http://localhost:3000/">http://localhost:3000/</a> in a browser.</p> <p>So that Heroku knows how to start our application, we also need to create a <code>Procfile</code> that contains:</p> <pre><code class="bash">web: node app.js
</code></pre> <p>Now we have an application we can deploy to Heroku.</p> <h2 class="anchored"> <a name="deploy-to-heroku" href="#deploy-to-heroku">Deploy to Heroku</a> </h2> <p>Let’s initialize Git in our project directory by running:</p> <pre><code class="bash">git init
</code></pre> <p>Create a <code>.gitignore</code> file that contains:</p> <pre><code class="bash">node_modules
</code></pre> <p>If we run <code>git status</code> at this point, we should see:</p> <pre><code class="bash">On branch main

No commits yet

Untracked files:
  (use "git add &lt;file&gt;..." to include in what will be committed)
    .gitignore
    Procfile
    app.js
    package.json
    pnpm-lock.yaml

nothing added to commit but untracked files present (use "git add" to track)
</code></pre> <p>Add and commit these files to git:</p> <pre><code class="bash">git add .
git commit -m "pnpm demo application"
</code></pre> <p>Then create an application on Heroku:</p> <pre><code class="bash">heroku create
</code></pre> <p>Not only does this create a new, empty application on Heroku, it also adds the <code>heroku</code> remote to your Git configuration (for more information see <a href="https://devcenter.heroku.com/articles/git#create-a-heroku-remote">Deploying with Git - Create a Heroku Remote</a>).</p> <p>Finally, we can deploy by pushing our changes to Heroku:</p> <pre><code class="bash">git push heroku main
</code></pre> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>Integrating <a href="https://pnpm.io/">pnpm</a> with your Node.js projects on Heroku can lead to more efficient builds and streamlined dependency management, saving time and reducing disk space usage. 
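As a final sanity check once the push finishes, you can confirm the deployed app responds the same way it did locally; the URL below is a placeholder for whatever <code>heroku create</code> generated for you:</p> <pre><code class="bash"># Open the deployed app in your browser
heroku open
# Or hit the root route directly and expect "Hello pnpm!" back
curl &lt;your-app-url&gt;/
</code></pre> <p>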
By following the steps outlined in this post, you can easily set up and start using <a href="https://pnpm.io/">pnpm</a> to enhance your development workflow. Try upgrading your application to <a href="https://pnpm.io/">pnpm</a> and deploying it to Heroku today.</p> </description> <author>Colin Casey</author> </item> <item> <title>Heroku Joins CNCF as a Platinum Member</title> <link>https://blog.heroku.com/heroku-joins-cncf-platinum-member</link> <pubDate>Thu, 27 Jun 2024 18:35:00 GMT</pubDate> <guid>https://blog.heroku.com/heroku-joins-cncf-platinum-member</guid> <description><p>Heroku is <a href="https://www.cncf.io/announcements/2024/06/27/cloud-native-computing-foundation-announces-heroku-joins-as-a-platinum-member/">joining the CNCF at the platinum level</a>, upgrading the long-held CNCF Salesforce membership. This marks my third time serving on the CNCF board for different companies, and I’m excited to participate again. Joining the CNCF at the Platinum level signifies a major commitment, reflecting Heroku’s dedication to the evolving landscape.</p> <p>My three board stints align with significant shifts in the cloud-native landscape. Two are behind us, one is happening now, and it’s the current one that motivated us to join now. Quick preview: It’s not the AI shift going on right now; the substrate underlying AI/ML shifted to Kubernetes a while ago.</p> <p>As to why we are joining and why now, let’s take a look at the pivotal shifts that have led us to this point.</p> <h2 class="anchored"> <a name="the-first-shift-kubernetes-launches-the-early-adopter-phase" href="#the-first-shift-kubernetes-launches-the-early-adopter-phase">The First Shift: Kubernetes Launches - The Early Adopter Phase</a> </h2> <p>It’s been a decade since Kubernetes was launched, and even longer since Salesforce acquired Heroku. Ten years ago, Heroku was primarily used by startups and smaller companies, and Kubernetes 1.0 had just launched (yes, I was on stage for that! <a href="https://www.youtube.com/watch?v=fBa79csw9-M">Watch the video</a> for a blast from the past). Google Kubernetes Engine (GKE) had launched, but no other cloud services had yet offered a managed Kubernetes solution. I was the Cloud Native CTO at Samsung, and we made an early bet on Kubernetes as transformative to the way we deployed and managed applications both on cloud and on-premises. This was the early adopter phase.</p> <p>Heroku was one of the early influences on Kubernetes, particularly in terms of <a href="https://www.heroku.com/dx">developer experience</a>, most notably with The Twelve-Factor App (<a href="http://www.12factor.net">12-Factor App</a>), which influenced “cloud native” thinking. My presentations from the Kubernetes 1.0 era have Heroku mentions all over them, and it was no surprise to see Heroku highlighted in <a href="https://www.youtube.com/live/jYjEWlnY25M?si=UUdtNcBUcUfdODnE&amp;t=4222">Eric Brewer’s great talk</a> at the KuberTENes 10th anniversary event. Given Heroku’s legendary focus on user experience, one might wonder why the Kubernetes developer experience turned out the way it did. More on this later, but Kubernetes was built primarily to address the most critical yet painful and error-prone part of the software lifecycle, and the one most people were spending the majority of their time on — operations. In this regard, it is an incredible success. 
Kubernetes also represented the first broad-based shift to declarative intent as an operational practice, encapsulated by Alexis Richardson as “gitops.” Heroku has a similar legacy: “git push heroku master.” Heroku was doing gitops before it had a name.</p> <h2 class="anchored"> <a name="the-second-shift-kubernetes-goes-big" href="#the-second-shift-kubernetes-goes-big">The Second Shift: Kubernetes Goes Big</a> </h2> <p>EKS launched six years ago and quickly became the largest Kubernetes managed service, with large companies across all industries adopting it. AWS was the last of the big three to launch a Kubernetes managed service, and this validated that Kubernetes had grown massively and most companies were adopting it as the standard. During this era, Kubernetes was deployed at scale as the primary production system for many companies or the primary production system for new software. Notably, Kubeflow was adopted broadly for ML use cases — Kubernetes was becoming the standard for AI/ML workloads. This continues to this day with generative AI.</p> <p>During this time, Heroku also matured. Although the credit-card-based Heroku offering remained popular for new startups and citizen developers, the Heroku business shifted rapidly towards the <a href="https://www.heroku.com/enterprise">enterprise offering</a>, which is now the majority of the business. Although many think of Heroku as primarily a platform for startups, this hasn’t been the case for many years.</p> <p>Salesforce was one of the companies that adopted Kubernetes at a huge scale with <a href="https://help.salesforce.com/s/articleView?id=000388902&amp;type=1">Hyperforce</a>. The successes of this era (including Hyperforce) were characterized by highly skilled platform teams, often with contributors to Kubernetes or adjacent projects. This demonstrates the value of cloud-native approaches to a company — the significant cost of managing the complexity of Kubernetes and the adjacent systems (including OpenTelemetry, Prometheus, OCI, Docker, Argo, Helm… the CNCF landscape now has over 200 projects) is worth the investment.</p> <p>However, the large investment in technical expertise is a barrier to even wider adoption beyond the smaller number of more sophisticated enterprises. To be clear, I’m not talking about using EKS, AKS, or GKE—that’s a given. These services are far more cost-effective at running Kubernetes safely and at scale than most enterprises could ever be, thanks to cost efficiencies at scale.</p> <h2 class="anchored"> <a name="the-third-shift-is-afoot-kubernetes-goes-really-wide" href="#the-third-shift-is-afoot-kubernetes-goes-really-wide">The Third Shift is Afoot: Kubernetes Goes Really Wide</a> </h2> <p>Kubernetes is awesome but complex, and we are seeing the next wave of adopters start to adopt Kubernetes. This wave needs an approach to Kubernetes that provides the benefits without the huge investment. This is why we have shifted the Heroku strategy to be based on Kubernetes going forward. You can hear this announcement during my keynote at KubeCon Paris: <a href="https://www.youtube.com/watch?v=zCuc5VnVrJI">Watch the keynote</a>. 
We are committed to bringing our customers Kubernetes’ benefits on the inside, without the complexity, wrapped in Heroku’s signature simplicity.</p> <h2 class="anchored"> <a name="summary-how-should-we-all-think-about-kubernetes" href="#summary-how-should-we-all-think-about-kubernetes">Summary: How Should We All Think about Kubernetes?</a> </h2> <p>We view Kubernetes, to quote <a href="https://www.linuxfoundation.org/about/leadership">Jim Zemlin</a>, as the “Linux of the Cloud.” Linux is a single-machine operating system, whereas Kubernetes is the distributed operating system layered on top. Today, Kubernetes is more like the Linux kernel, rather than a full distribution. Various Linux vendors collaborate on a common kernel and differentiate in user space. We view Heroku’s product and contribution to Kubernetes as following that model. We will work with the community on the common unforked Kubernetes but will build great things on top, including Heroku as you know it today.</p> <p><a href="https://blog.heroku.com/heroku-cloud-native-buildpacks"><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1719511417-heroku-cnb-blog.png" alt="heroku-cnb-blog"></a></p> <h2 class="anchored"> <a name="final-thoughts" href="#final-thoughts">Final Thoughts</a> </h2> <p>Heroku's commitment to joining the CNCF at the platinum level underscores our dedication to the evolving cloud-native landscape. There’s still more progress to be made for developers &amp; operators alike. That’s why we’re invested in <a href="https://buildpacks.io/">Cloud Native Buildpacks</a>. It lets companies standardize how they build application container images. People can hit the ground running with our recently open sourced <a href="https://blog.heroku.com/heroku-cloud-native-buildpacks">Heroku Cloud Native Buildpacks</a>. As Kubernetes and the other constellation of projects around it continue to expand, we are excited to participate, ensuring our customers benefit from its capabilities while maintaining the simplicity and user experience that Heroku is known for.</p> </description> <author>Bob Wise</author> </item> <item> <title>Optimizing Data Reliability: Heroku Connect & Drift Detection</title> <link>https://blog.heroku.com/optimizing-data-reliability-heroku-connect-drift-detection</link> <pubDate>Tue, 25 Jun 2024 21:43:00 GMT</pubDate> <guid>https://blog.heroku.com/optimizing-data-reliability-heroku-connect-drift-detection</guid> <description><p><a href="https://www.heroku.com/connect">Heroku Connect</a> makes it easy to sync data at scale between Salesforce and <a href="https://www.heroku.com/postgres">Heroku Postgres</a>. You can build Heroku apps that bidirectionally share data in your Postgres database with your contacts, accounts, and other custom objects in Salesforce. Easily configured with a point-and-click UI, you can get the integration up and running in minutes without writing code or worrying about API limits. In this post, we introduce our recent improvements to <a href="https://devcenter.heroku.com/articles/heroku-connect">Heroku Connect</a> on how we handle drift and drift detection for our customers.</p> <p><a href="https://www.heroku.com/customers/pensionbee">PensionBee</a>, the U.K.-based company, is on a mission to make pensions simple and engaging by building a digital-first pension service on Heroku. 
PensionBee’s consumer-friendly web and mobile apps deliver sophisticated digital experiences that give people better visibility and control over their retirement savings.</p> <p>PensionBee’s service relies on a smooth flow of data between the customer-facing app on Heroku and Salesforce on the backend. Both customers and employees need to view and access the most current account data in real time. Heroku Connect ensures all of PensionBee’s systems stay in sync to provide the best end-user experience.</p> <p><a href="https://www.heroku.com/customers/pensionbee"><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1719261257-PensionBee-CTA.png" alt="Read the PensionBee Case Study" title="Modernizing the Pension Industry with a Digital-First Business on Heroku"></a></p> <h2 class="anchored"> <a name="understanding-data-drift" href="#understanding-data-drift">Understanding Data Drift</a> </h2> <p>Heroku Connect <a href="https://devcenter.heroku.com/articles/reading-data-from-salesforce-with-heroku-connect">reads data from Salesforce</a> and updates Postgres by polling for changes in your Salesforce org within a time window. The initial poll done to bring in changes from Salesforce to Postgres is called a “primary poll”. As the data syncs to Postgres, the polling window moves to capture the next set of changes from Salesforce. The primary poll syncs almost all changes, but it's possible to miss some changes that lead to “drift”. </p> <p>Heroku Connect does the hard work of monitoring for “drift” for you and ensures the data eventually becomes consistent. We have now increased the efficiency of this feature to recognize and address drift detection even faster on your behalf. As before, this process is transparent to you; however, we thought our customers might enjoy understanding a bit more about what is going on behind the scenes.</p> <p>There are several complications in ensuring that the data sync between the two systems is performant while being reliable. One complication is when Heroku Connect polls a Salesforce object for changes, and a long-running automation associated with record updates doesn’t commit data at that time. When those transactions are committed, the polling window could have already moved on to capture the next set of changes in Salesforce. Those missed long-running transactions result in drift. Heroku Connect handles those missed changes seamlessly for its customers.</p> <h2 class="anchored"> <a name="drift-detection-ensuring-data-accuracy-and-consistency" href="#drift-detection-ensuring-data-accuracy-and-consistency">Drift Detection: Ensuring Data Accuracy and Consistency</a> </h2> <p>Heroku Connect tracks poll windows for each <a href="https://devcenter.heroku.com/articles/managing-heroku-connect-mappings">mapping</a> while retrying any failed polls. Drift detection uses a “secondary poll” to catch and fix any changes the primary poll missed. Heroku Connect tracks the poll bounds of the primary poll and schedules a secondary poll for the same poll bounds after some time. Depending on the size of the dataset the primary poll is synchronizing, Heroku Connect uses either the <a href="https://devcenter.heroku.com/articles/reading-data-from-salesforce-with-heroku-connect#bulk-api">Bulk API</a> or <a href="https://devcenter.heroku.com/articles/reading-data-from-salesforce-with-heroku-connect#soap-api">SOAP API</a> for polling. 
Heroku Connect leverages Salesforce APIs without impacting your API usage limits and license.</p> <p>With the Bulk API, Heroku Connect creates a bulk job and adds <a href="https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_batches_intro.htm">bulk batches</a> to the bulk job during the primary poll. Heroku Connect tracks the poll bounds for each bulk batch, and then performs a secondary poll corresponding to the poll bounds for each bulk batch in the primary poll. During the secondary poll, Heroku Connect creates a bulk job for each bulk batch processed by the primary poll. Sync using Heroku Connect is asynchronous with retries, so it isn’t real-time, though it appears to be.</p> <h2 class="anchored"> <a name="scale-and-performance-improvements" href="#scale-and-performance-improvements">Scale and Performance Improvements</a> </h2> <p>As Heroku Connect serves more customers with increasingly large mappings, we continue to ensure we provide a scalable, reliable, and performant solution for our customers. One of the areas where we made significant improvements is the way we manage and schedule secondary polls for drift detection, especially for polls that use the Bulk API.</p> <h3 class="anchored"> <a name="reduced-load-on-the-salesforce-org" href="#reduced-load-on-the-salesforce-org">Reduced load on the Salesforce org</a> </h3> <p>In the old process, the secondary poll created a large number of bulk jobs in Salesforce. Now the secondary poll only creates a single bulk job for each bulk job created by the primary poll. Then, for each bulk batch processed by the primary poll, a bulk batch is added to the secondary poll’s bulk job.</p> <h3 class="anchored"> <a name="optimized-management-of-the-secondary-poll" href="#optimized-management-of-the-secondary-poll">Optimized management of the secondary poll</a> </h3> <p>Previously, there was no limit on the number of bulk tasks processed by the secondary poll at a time. As primary bulk batches completed, any number of secondary bulk tasks were scheduled and executed simultaneously. Now Heroku Connect schedules and executes secondary polls so that there’s limited bulk activity at a time. This helps with:</p> <ul> <li> <strong>Improved availability of database connections:</strong> Heroku Connect opens database connections as it updates data in Postgres from Salesforce. With an unlimited number of simultaneous secondary poll tasks, Heroku Connect opens a large number of database connections, leaving fewer connections for your applications accessing the same database. By limiting secondary poll tasks and scheduling them in a controlled way, Heroku Connect uses a much smaller number of database connections at any given time, allowing your applications enough connections to work with.</li> <li> <strong>Improved operational reliability:</strong> Our optimizations in scheduling secondary polls enhance the overall performance, ensuring that even during heavy sync activities, the quality of service remains high for all users sharing the underlying infrastructure.</li> </ul> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>At Heroku, we take the trust, reliability, and availability of our platform seriously. By investing in projects such as improving drift detection, we’re constantly working to improve the resilience of our systems and provide the best possible experience so our customers like PensionBee can continue to rely on Heroku Connect to keep their data in sync. 
Thank you for choosing Heroku!</p> <p>If you have any thoughts or suggestions on future reliability improvements we can make, check out <a href="https://github.com/heroku/roadmap">our public roadmap</a> on GitHub and submit an issue!</p> <h2 class="anchored"> <a name="about-the-authors" href="#about-the-authors">About the Authors</a> </h2> <p>Siraj Ghaffar is a Lead Engineer for Heroku Connect at Salesforce. He has broad experience in distributed, scalable, and reliable systems. You can follow him on <a href="https://www.linkedin.com/in/siraj-ghaffar">LinkedIn</a>.</p> <p>Vivek Viswanathan is a Director of Product Management for Heroku Connect at Salesforce. He has more than a decade of experience with the Salesforce ecosystem, and his primary focus for the past few years has been scalable architecture and Heroku. You can follow him on <a href="https://www.linkedin.com/in/vivekviswanathan/">LinkedIn</a>.</p> </description> <author>Siraj Ghaffar</author> </item> <item> <title>Introducing New Heroku Postgres Essential Plans Built On Amazon Aurora</title> <link>https://blog.heroku.com/heroku-postgres-essential-launch</link> <pubDate>Tue, 21 May 2024 17:05:00 GMT</pubDate> <guid>https://blog.heroku.com/heroku-postgres-essential-launch</guid> <description><p>We’re thrilled to <a href="https://devcenter.heroku.com/changelog-items/2877">launch our new Heroku Postgres Essential database plans</a>. These plans have <code>pgvector</code> support, no row count limits, and come with a 32 GB option. We deliver exceptional transactional query performance with Amazon Aurora as the backing infrastructure. One of our beta customers said:</p> <blockquote> <p><strong>“The difference was noticeable right from the start. Heroku Postgres running on Aurora delivered a boost in speed, allowing us to query and process our data faster.”</strong></p> </blockquote> <p>Our Heroku Postgres Essential plans are the quickest, easiest, and most economical way to integrate a SQL database with your Heroku application. You can use these fully managed databases for a wide range of applications, such as small-scale production apps, research and development, <a href="https://www.heroku.com/students">educational purposes</a>, and prototyping. These plans offer full PostgreSQL compatibility, allowing you to use existing skills and tools effortlessly.</p> <p><img src="https://heroku-blog-files.s3.amazonaws.com/posts/1716240158-Heroku%20Postgres%20%2B%20Amazon%20Aurora_Image%201.png" alt="Heroku Postgres Partnership With Amazon Aurora"></p> <p>Compared to the previous generation of Mini and Basic database plans, the Essential plans on the new infrastructure provide up to three times the query throughput performance and additional improvements such as removing the historic row count limit. 
The table highlights what each of the new plans includes in more detail.</p> <table> <thead> <tr> <th>Product</th> <th>Storage</th> <th>Max Connections</th> <th>Max Row Count</th> <th>Max Table Count</th> <th>Postgres Versions</th> <th>Monthly Pricing</th> </tr> </thead> <tbody> <tr> <td>Essential-0</td> <td>1 GB</td> <td>20</td> <td>No limit</td> <td>4,000</td> <td>14, 15, 16</td> <td>$5</td> </tr> <tr> <td>Essential-1</td> <td>10 GB</td> <td>20</td> <td>No limit</td> <td>4,000</td> <td>14, 15, 16</td> <td>$9</td> </tr> <tr> <td>Essential-2</td> <td>32 GB</td> <td>40</td> <td>No limit</td> <td>4,000</td> <td>14, 15, 16</td> <td>$20</td> </tr> </tbody> </table> <h2 class="anchored"> <a name="our-commitment-to-the-developer-experience" href="#our-commitment-to-the-developer-experience">Our Commitment to the Developer Experience</a> </h2> <p>At Heroku, we deliver a world-class developer experience that’s reflected in our new Essential database plans. Starting at just $5 per month, we provide a fully managed database service built on Amazon Aurora. With these plans, developers are assured they’re using the latest technology from AWS and they can focus on what’s most important—innovating and building applications—without the hassle of database management. </p> <p>We enabled <a href="https://devcenter.heroku.com/articles/upgrading-heroku-postgres-databases#upgrading-with-pg-upgrade"><code>pg:upgrade</code></a> for easier upgrades to major versions and removed the row count limit for increased flexibility and scalability for your projects. We also included support for the <a href="https://blog.heroku.com/pgvector-launch"><code>pgvector</code> extension</a>, bringing vector similarity search to the <a href="https://elements.heroku.com/addons/heroku-postgresql#pricing">entire suite of Heroku Postgres plans</a>. <code>pgvector</code> enables exciting possibilities in AI and natural language processing applications across all of your development environments.</p> <p>You can create a Heroku Postgres Essential database with:</p> <pre><code>$ heroku addons:create heroku-postgresql:essential-0 -a example-app
</code></pre> <h2 class="anchored"> <a name="migrating-mini-and-basic-postgres-plans" href="#migrating-mini-and-basic-postgres-plans">Migrating Mini and Basic Postgres Plans</a> </h2> <p>If you already have Mini or Basic database plans, we’ll <a href="https://devcenter.heroku.com/articles/heroku-postgres-plans#mini-and-basic-deprecation-details">automatically migrate</a> them to the new Essential plans. We’re migrating Mini plans to Essential-0 and Basic plans to Essential-1. We’re making this process as painless as possible with minimal downtime for most databases. Our automatic migration process begins on May 29, 2024, when the Mini and Basic plans reach end-of-life and are succeeded by the new Essential plans. See our <a href="https://devcenter.heroku.com/articles/postgres-essential-tier">documentation for migration details</a>. 
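If you're not sure which plan a given database is on today, you can check from the CLI before the migration window, reusing the same <code>example-app</code> placeholder as above:</p> <pre><code>$ heroku addons -a example-app
$ heroku pg:info -a example-app
</code></pre> <p>The output of <code>pg:info</code> includes the plan, Postgres version, and current row count for each attached database.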
</p> <p>You can also proactively migrate your Mini or Basic plan to any of the new Essential plans, including the Essential-2 plan, using <code>addons:upgrade</code>:</p> <pre><code>$ heroku addons:upgrade DATABASE heroku-postgresql:essential-0 -a example-app </code></pre> <h2 class="anchored"> <a name="exploring-the-use-cases-of-the-essential-plans" href="#exploring-the-use-cases-of-the-essential-plans">Exploring the Use Cases of the Essential Plans</a> </h2> <p>With the enhancements of removing row limits, adding <code>pgvector</code> support, and more, Heroku Postgres Essential databases are a great choice for customers of any size with these use cases.</p> <ul> <li> <strong>Development and Testing</strong>: Ideal for developers looking for a cost-effective, fully managed Postgres database. You can develop and test your applications in an environment that closely mimics production, ensuring everything runs smoothly before going live.</li> <li> <strong>Prototype Projects</strong>: In the prototyping phase, the ability to adapt quickly based on user feedback or test results is crucial. With Essential plans, you get the flexibility and affordability needed to iterate fast and effectively during this critical stage.</li> <li> <strong>Educational Projects and Tutorials</strong>: Ideal for educational setups that require access to live cloud database environments. They're perfect for hands-on learning, from running SQL queries to exploring cloud application management and operations, without managing the complex infrastructure.</li> <li> <strong>Low Traffic Web Apps</strong>: Ideal for experimental or low traffic applications such as small blog sites or forums. Essential plans provide the necessary reliability and performance, including daily backups and scalability options as your user engagement grows. </li> <li> <strong>Startups</strong>: The Essential plans offer a fully managed and scalable database solution, important for startup businesses to grow without initial heavy investments. It can help speed up time-to-market and reach customers faster.</li> <li> <strong>Salesforce Integration Trial</strong>: The best method to synchronize Salesforce data and <a href="https://www.heroku.com/postgres">Heroku Postgres</a> is with <a href="https://www.heroku.com/connect">Heroku Connect</a>. The <a href="https://devcenter.heroku.com/articles/heroku-connect#available-plans"><code>demo</code> plan</a> works with Essential database plans. Although the demo plan isn’t suitable for production use cases, it provides a way to explore how Heroku Connect can amplify your Salesforce investment.</li> <li> <strong>Incorporating pgvector</strong>: Essential database plans support <a href="https://devcenter.heroku.com/articles/pgvector-heroku-postgres"><code>pgvector</code></a>, an open-source extension for Postgres designed for efficient vector search capabilities. This feature is invaluable for applications requiring high-performance similarity searches, such as recommendation systems, content discovery platforms, and image retrieval systems. 
Use <code>pgvector</code> on Essential plans to build advanced search functionalities such as AI-enabled applications and Retrieval Augmented Generation (RAG).</li> </ul> <h2 class="anchored"> <a name="looking-forward" href="#looking-forward">Looking Forward</a> </h2> <p>As <a href="https://youtu.be/fZLcv7rwj7Y?si=13K95oX2oEVQ_T9p&amp;t=1945">announced</a> at re:Invent 2023, we’re collaborating with the Amazon Aurora team on the next-generation Heroku Postgres infrastructure. This partnership combines the simplicity and user experience of Heroku with the robust performance, scalability, and flexibility of Amazon Aurora. The launch of Essential database plans marks the beginning of a broader rollout that will soon include a fleet of single-tenant databases.</p> <p>Our new Heroku Postgres plans will decouple storage and compute, allowing you to scale storage up to 128 TB. They’ll also add more database connections and more Postgres extensions, offer near-zero-downtime maintenance and upgrades, and much more. The future architecture will ensure fast and consistent response times by distributing data across multiple availability zones with robust data replication and continuous backups. Additionally, the <a href="https://www.heroku.com/shield">Shield</a> option will continue to meet compliance needs with regulations like HIPAA and PCI, ensuring secure data management.</p> <h2 class="anchored"> <a name="conclusion" href="#conclusion">Conclusion</a> </h2> <p>Our Heroku Postgres databases built on Amazon Aurora represent a powerful solution for customers seeking to enhance their database capabilities with a blend of performance, reliability, cost-efficiency, and Heroku’s simplicity. Whether you're scaling a high web traffic application or managing large-scale batch processes, <a href="https://www.salesforce.com/news/press-releases/2023/11/27/aws-data-ai-strategic-partnership-expansion/?_gl=1*eynx4n*_ga*MTcwNTczMTI0Ny4xNjY5NzUzMDAx*_ga_62RHPFWB9M*MTcxNTk3Njg0Ni40MzkuMC4xNzE1OTc2ODQ2LjAuMC4w">our partnership with AWS</a> accelerates the delivery of Postgres innovations to our customers. Eager to be part of this journey? Join the <a href="https://docs.google.com/forms/d/e/1FAIpQLSeDKfTK-mH2uW5b-3T3RS-vNrrnRVsKrx53BvhqYo8uSHH3yA/viewform">waitlist for the single-tenant database pilot program</a>.</p> <p>We want to extend our gratitude to the community for the feedback and helping us build products like <a href="https://github.com/heroku/roadmap/issues/292">Essential Plans</a>. Stay connected and share your thoughts on our <a href="https://github.com/heroku/roadmap">GitHub roadmap page</a>. If you have questions or require assistance, our dedicated <a href="https://help.heroku.com/">Support team</a> is available to assist you on your journey into this exciting new frontier. </p> </description> <author>Jonathan Brown</author> </item> </channel> </rss>