[{"section":"","url":"https://microcks.io/author/","title":"Author","description":"Read latest blog posts by Author","searchKeyword":"","content":""},{"section":"Author","url":"https://microcks.io/author/alain-pham/","title":"Alain Pham","description":"","searchKeyword":"","content":"Principal Solutions Engineer at Grafana Labs\n"},{"section":"Author","url":"https://microcks.io/author/carol-gschwend/","title":"Carol Gschwend","description":"","searchKeyword":"","content":"Senior Software Engineer at J.B. Hunt\n"},{"section":"Author","url":"https://microcks.io/author/diane-mueller/","title":"Diane Mueller","description":"","searchKeyword":"","content":"Managing Director, Research and Advisory Services at Bitergia\n"},{"section":"Author","url":"https://microcks.io/author/hugo-guerrero/","title":"Hugo Guerrero","description":"","searchKeyword":"","content":"Chief Software Architect, APIs \u0026amp; Integration Developer Advocate at Red Hat\n"},{"section":"Author","url":"https://microcks.io/author/laurent-broudoux/","title":"Laurent Broudoux","description":"","searchKeyword":"","content":"Co-founder of Microcks | Director of Engineering at Postman Open Technologies\n"},{"section":"Author","url":"https://microcks.io/author/ludovic-pourrat/","title":"Ludovic Pourrat","description":"","searchKeyword":"","content":"API Architect | Platform Architect at Lombard Odier\n"},{"section":"Author","url":"https://microcks.io/author/nicolas-masse/","title":"Nicolas Masse","description":"","searchKeyword":"","content":"Principal Solution Architect at Red Hat\n"},{"section":"Author","url":"https://microcks.io/author/nikolay-afanasyev/","title":"Nikolay Afanasyev","description":"","searchKeyword":"","content":"Lead Developer\n"},{"section":"Author","url":"https://microcks.io/author/sebastien-fraigneau/","title":"Sebastien Fraigneau","description":"","searchKeyword":"","content":"Senior Software Engineer at CNAM\n"},{"section":"Author","url":"https://microcks.io/author/yacine-kheddache/","title":"Yacine Kheddache","description":"","searchKeyword":"","content":"Co-founder of Microcks | Director of Product Strategy \u0026amp; Innovation at Postman Open Technologies\n"},{"section":"","url":"https://microcks.io/blog/","title":"Latest News","description":"Microcks latest blog posts and news","searchKeyword":"","content":""},{"section":"Blog","url":"https://microcks.io/blog/testcontainers-modules-0.3/","title":"Announcing Testcontainers Modules 0.3","description":"Announcing Testcontainers Modules 0.3","searchKeyword":"","content":"To start the 2025 Year fresh, we\u0026rsquo;re delighted to announce the release of the new series of our Testcontainers Modules 🧊! Microcks modules are language-specific libraries that enable embedding Microcks into your unit tests with lightweight, throwaway instances thanks to containers.\nThe 0.3 series is a major step forward that completes the set of features and elevates Microcks as a fully featured mocking library for development purposes. It can be used with different testing styles (classicist, mockist, state-based, and interaction-based) and provides features for all major languages and all kinds of API!\nPhoto by Alexandre Boucey on Unsplash The 0.3 versions are coincident releases made last week on the three main languages we currently support: Java ☕️, NodeJS/Typescript and Golang. With those new releases, we now have complete feature parity among the three different technology stacks! 
We plan to add the same features to the .NET module, which joined our portfolio in December 2024.\nLet’s introduce the new features of those releases that complete the picture!\nWhat’s inside? 1️⃣ Interaction checks If you have spent some time reading about Test-Driven Development, you may know that there are two main schools of thought: the Classicist and the Mockist (more on this in this excellent article).\nUntil now, Microcks Testcontainers integration was Classicist in the sense that it focused on providing canned responses when the mock endpoints were called. With this 0.3 release, we now also offer a Mockist approach in case you want to check the interactions of your component under test with a dependent API.\nYou can now call verify() or getServiceInvocationsCount() on a Microcks container to check whether an API dependency has actually been called - or not. This is a super powerful way to ensure that your application logic (when to interact with an API) and access policies (caching, no caching, etc.) are correctly implemented and use the mock endpoints only when required!\nA big shout out to pierrechirstinimsa 🙏 who contributed this one for our Java module 🎉\n2️⃣ Access to messages in synchronous API contract-testing The second thing we added in this release is the ability to retrieve exchanged messages when doing contract-testing. Different contract-testing runners already exist in Microcks, but they’re mainly focused on syntactical conformance checking. However, we know that there are multiple levels of contract testing.\nThe new getMessagesForTestCase() function on the Microcks container allows you to retrieve the requests and responses that were used during an operation’s test case. You can then use them to perform extra checks more related to business conformance; validating, for example, that data retrieval or transformation logic is correctly implemented.\nThese extra checks can be done manually, directly in the programming language of your tests, but you can also delegate them to frameworks like Cucumber, whose scenarios are written in plain language by business experts, for running acceptance tests!\n3️⃣ Access to events in asynchronous EDA contract-testing Finally, we also added a way to do the same thing for asynchronous events received during a contract test on an asynchronous broker.\nUntil now, Microcks ASYNC_API_SCHEMA tests allowed you to check that events were fired, correctly sent, read back from - let’s say - an Apache Kafka topic, and valid against a schema. But what about the content of this event? Was it really related to the business function call that fired it?\nThe new getEventMessagesForTestCase() function allows you to retrieve those events that were read from the broker topic and - here again - perform extra checks. Typically, you can validate that the data from the event is correctly correlated to the original data that triggered the event emission.\n💡 In the case of EDA, those checks are also tightly related to interaction checks; and this is a situation we faced during the development of the supporting demo application! Once we added them, we realized that our Kafka producer had a flushing issue and that we received more messages than we expected! Due to a timeout configuration that was too low, events were not sent at the right time, introducing de-synchronization and collisions! 💥\n
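To make those three features more concrete, here is a minimal Java sketch of how they could fit together in a single test. Only the function names come from this walkthrough; the image tag, artifact path, "Pastry API" service coordinates, test endpoint, and exact signatures below are illustrative assumptions that you should double-check against the Java module’s documentation.

```java
import io.github.microcks.testcontainers.MicrocksContainer;
import io.github.microcks.testcontainers.model.RequestResponsePair;
import io.github.microcks.testcontainers.model.TestRequest;
import io.github.microcks.testcontainers.model.TestResult;

import java.io.File;
import java.util.List;

public class InteractionChecksSketch {
   public static void main(String[] args) throws Exception {
      // Lightweight, throwaway Microcks instance (image tag and artifact are hypothetical).
      try (MicrocksContainer microcks = new MicrocksContainer("quay.io/microcks/microcks-uber:latest")) {
         microcks.start();
         microcks.importAsMainArtifact(new File("src/test/resources/pastry-api.yaml"));

         // ...exercise your component under test against the Microcks mock endpoints here...

         // 1️⃣ Interaction checks (Mockist style): was the dependency actually called, and how often?
         boolean invoked = microcks.verify("Pastry API", "1.0");
         Long count = microcks.getServiceInvocationsCount("Pastry API", "1.0");
         System.out.println("Dependency invoked: " + invoked + " (" + count + " calls)");

         // 2️⃣ Contract-test a (hypothetical) implementation endpoint...
         TestRequest testRequest = new TestRequest.Builder()
               .serviceId("Pastry API:1.0")
               .runnerType("OPEN_API_SCHEMA")
               .testEndpoint("http://host.testcontainers.internal:8080")
               .build();
         TestResult result = microcks.testEndpoint(testRequest);

         // ...then retrieve the exchanged messages for business-conformance assertions.
         List<RequestResponsePair> pairs = microcks.getMessagesForTestCase(result, "GET /pastries");
         pairs.forEach(pair -> System.out.println(pair.getResponse().getContent()));

         // 3️⃣ For asynchronous APIs, getEventMessagesForTestCase(result, operation) plays
         // the same role for the events read back from the broker topic.
      }
   }
}
```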
Enthusiastic? We hope this walkthrough has made you enthusiastic about this new set of features in Microcks Testcontainers 0.3! The best thing is that you don’t have to wait for another Microcks release to try them out, as they leverage APIs and features that have been present in Microcks core for a long time!\nIf you want to learn about them and see them in action, we have completed our demonstration application and tutorials on Testcontainers Modules as well! You just have to check the following links:\nFor Java ☕️: How to check the mock endpoints are actually used, How to verify the business conformance of a synchronous API, and How to verify the event content for an asynchronous API\nFor NodeJS/TypeScript: How to check the mock endpoints are actually used, How to verify the business conformance of a synchronous API, and How to verify the event content for an asynchronous API\nFor Golang: How to check the mock endpoints are actually used, How to verify the business conformance of a synchronous API, and How to verify the event content for an asynchronous API\nAs usual, we’re eager for community feedback: come and discuss on our Discord chat 👻\nThanks for reading and supporting us!\n"},{"section":"Blog","url":"https://microcks.io/blog/recap-of-an-amazing-2024/","title":"Recap of an Amazing 2024, and ready to go for 2025!","description":"Recap of an Amazing 2024, and ready to go for 2025!","searchKeyword":"","content":"As we wrap up 2024, we at Microcks want to express our gratitude to our adopters, sponsors, partners, and community members. Your unwavering support and engagement have been the foundation of our success, making this year nothing short of remarkable. We’ve achieved significant milestones such as deeper community involvement, stronger ecosystem collaboration, and significant communication contributions and amplification from our members.\nThis year has been transformative, marked by growth, innovation, and a sense of community that fuels our journey. Here’s a look at some key highlights as we enter 2025.\n🚀 Microcks in Numbers 2024 has been a year of outstanding achievements:\nOver 140k downloads for December 2024, consistently growing every month, and 600k+ downloads over the last year! A surge in community engagement with 2600+ LinkedIn followers, gaining over 1100 followers in 2024. We have almost 1500 GitHub stars on our main repository, a growth of +500 this year, reflecting the increasing interest in Microcks. We welcomed 21 public adopters (+10 this year!) and 28 private adopters that we know of. We encourage private adopters to join our public list to showcase their use of Microcks. Expanded globally with new users across the LATAM and APAC regions, emphasizing our growing international footprint, as highlighted by the Google Analytics data below for the microcks.io website over the past year. 🙌 Community Contributions: The Heart of Microcks Our community constantly amazes us with its contributions, driving innovation and expanding Microcks’ possibilities. Adopters and users actively contributed ideas and implementations, enabling new capabilities for the Microcks platform.\nA special shoutout to AXA France and Sebastien Degodez, who has become the maintainer of the Testcontainers .NET library, expanding our ecosystem and our shift-left approach to Microsoft developers 👉 https://testcontainers.com/modules/microcks/\nContributor Growth: 2024 saw significant contributor growth, demonstrating our community’s enthusiasm and commitment. According to devstats, Microcks had 60 contributors in 2024. 
These include committers from 25 organizations such as Google, Red Hat, Catena Clearing, Vinted, Adeo, Bancolombia, OPT New Caledonia and so on.\n👉 Explore more stats here.\n🌟 Ecosystem Collaborations: Stronger Together 2024 was a banner year for partnerships and collaborations that pushed the boundaries of what’s possible.\nKubeCon + CloudNativeCon NA In Salt Lake City, we announced an exciting new partnership with Traefik Labs: together, we deliver Sandbox as a Service with a fully GitOps-automated approach on Kubernetes. This collaboration showcases the seamless integration of Traefik’s API Gateway with Microcks’ API mocking and testing capabilities 👉 Read more here.\nAppDeveloperCon synergies with the Quarkus community during day 0 of KubeCon + CloudNativeCon NA. See the recording of the talk given with Daniel Oh (Java Champion, CNCF Ambassador, Developer Advocate at Red Hat) 👉 “Streamlining Cloud-Native Development: Simplifying Dependencies and Testing with Microcks”.\nBump.sh Partnership We teamed up with Bump.sh to revolutionize how developers work with API specifications. Combining Bump’s sleek documentation tools with Microcks’ robust mocking and testing capabilities offers a well-connected, efficient workflow 👉 Check out the details.\nObservability with Grafana Labs Working with Alain Pham from Grafana Labs, we introduced observability at scale for Microcks using OpenTelemetry, showcasing how modern monitoring tools can empower API testing workflows 👉 Blog post here.\nCollaboration with API Evangelist Partnering with Kin Lane, the API Evangelist, we explored a new schema for defining API examples. This initiative enhances the specification of reusable examples, fostering interoperability and opening doors for broader collaboration in the open source ecosystem 👉 Learn more about this specification.\n🗣️ Communication Contributions Adopters and partners have played a crucial role in spreading the word about Microcks through talks, events, and blog posts.\nNotable highlights include:\nLudovic Pourrat from Lombard Odier sharing their journey in “Revolutionizing API Strategy: Lombard Odier’s Success Story with Microcks.” Sebastien Fraigneau from CNAM detailing their collaboration in “Microcks for automated SOAP service mocking”. Julien Breux from Google, at the Conf42 Tech Conference, for a talk on “Testing with Testcontainers in Go!”. Leon Nunes from Solo.io, at a meetup in Bangkok on “Mocking GraphQL with Microcks and Gloo Platform”. Hugo Guerrero from Red Hat, during KCD Guadalajara, on the Microcks journey, in Spanish: “Llevando tu Proyecto de Hobby a la Cloud Native Computing Foundation”. …and many more; apologies to the ones I missed in this brief recap! These stories inspire us and highlight the various ways Microcks is making a meaningful impact in the real world, while also encouraging community members to contribute to enhancing the project’s visibility and promotion.\nIn 2024, we introduced two monthly community meetings, scheduled to accommodate different time zones. These sessions are open to everyone, so feel free to join us live!\n👉 Watch the recordings of previous sessions.\n💡 Looking Ahead: 2025 and Beyond 2024 was a year of hard work, camaraderie, and innovation. But we’re not stopping here. 
In 2025, exciting news and activities are lined up to make Microcks the #1 open source solution for API and microservices mocking and testing, and to level up within the Cloud Native Computing Foundation (CNCF).\nStay tuned for announcements, new features, and opportunities to collaborate and grow.\nThank you for being part of this incredible journey. Here’s to a happy, healthy, and prosperous 2025! 🥂\nLet’s make the cloud native future even brighter together.\n"},{"section":"Blog","url":"https://microcks.io/blog/lombard-odier-revolutionizing-api-strategy/","title":"Revolutionizing API Strategy: Lombard Odier's Success Story with Microcks","description":"Revolutionizing API Strategy: Lombard Odier's Success Story with Microcks","searchKeyword":"","content":"Lombard Odier is a global wealth and asset manager. For over 225 years and through more than 40 financial crises, the Group has aligned itself with the long-term interests of private and institutional clients. It has a strong balance sheet with a CET1 ratio of 31.7% and a Fitch rating of AA-, the highest possible rating for a bank of its size.\nStructured as an independent partnership, Lombard Odier is solely owned by its Managing Partners. This governance model allows the Bank to remain completely client-focused and innovative at the highest level of the organisation.\nLombard Odier quickly recognized APIs’ pivotal role in modern IT strategies. To foster agility and innovation, it embarked on a transformative journey, integrating Microcks as a cornerstone of its API lifecycle management.\nAdding Mock and Sandbox as a Service Capability One of Lombard Odier’s IT strategies for accelerating its transformation program is the incorporation of a mock and sandbox as a service capability within its API strategy. Recognizing the importance of testing and iterating without disrupting their live APIs, they leveraged Microcks to integrate mock services seamlessly. This approach allows developers to experiment, refine, and perfect their APIs in a controlled environment, ensuring a smooth and transparent transition to production.\nAPIOps Approach: The Driving Force Lombard Odier adopts a complete APIOps approach, treating APIs as products and orchestrating their development, deployment, maintenance, and operability. This approach aligns perfectly with Microcks’ capabilities, making it the linchpin of their API strategy. Through Microcks, Lombard Odier brings agility and efficiency to their APIOps pipeline, from design to deployment.\nSupport for Multiple Specifications and Protocols Lombard Odier operates in a diverse technology landscape, employing various specifications and protocols to cater to different use cases. Microcks seamlessly integrates with OpenAPI, AsyncAPI, GraphQL, and gRPC, providing a unified platform for managing this diverse set of APIs. This flexibility ensures that Lombard Odier can choose the best tool for the job while maintaining a consistent and efficient API management process.\nA Testimonial from Ludovic Pourrat, Head of API Management at Lombard Odier. In the ever-evolving landscape of API management, Lombard Odier stands out as a trailblazer in embracing cutting-edge technologies to drive innovation and growth.\nMicrocks is a robust open source tool that has become an essential solution for Lombard Odier. It enables us to manage, maintain, and automate the lifecycle of our extensive API ecosystem efficiently. 
At the heart of our remarkable journey, Microcks has become a key asset that achieves the right balance between innovation and stability, empowering developers with fast iterations of their APIs.\nThe Numbers Speak: Managing Over 2000 API Endpoints with Fewer Than 200 Developers. Microcks’ scalability and efficiency are evident in Lombard Odier’s achievement of managing over 2000 API endpoints with a lean team of fewer than 200 developers.\nMicrocks’ ability to streamline workflows, automate testing in our CI/CD pipelines, and enhance collaboration ultimately allows Lombard Odier to scale its API operations seamlessly.\nConclusion Lombard Odier’s success story with Microcks exemplifies the transformative power of embracing modern API management practices. By integrating a mock and sandbox as a service capability, adopting a complete APIOps approach, and leveraging Microcks’ support for various specifications and protocols, Lombard Odier has not only streamlined its API lifecycle but also set the stage for continuous innovation and development efficiency.\nAs organizations worldwide grapple with the complexities of API management, Lombard Odier’s journey is an inspiring testament to the impact of adopting the right tools and strategies. Microcks, as a central player in their API ecosystem, has proven its worth in driving efficiency, agility, and sustained success.\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.10.0-release/","title":"Microcks 1.10.0 release 🚀","description":"Microcks 1.10.0 release 🚀","searchKeyword":"","content":"We are excited to announce today the 1.10.0 release of Microcks, the CNCF’s open-source cloud-native tool for API Mocking and Testing, ready for summer ☀️ vacations! 🚀\nFor this release, we received help from 4 new code committers and dozens of others who opened, contributed to, and reviewed 46 issues. Most of them are adopters! Kudos to all of them 👏, and see greetings along the notes below.\nThe 1.10.0 release brings you a wave of new features, including stateful mocks support, a new lightweight API Examples specification format, tons of enhancements in the Uber and Native distributions, and a big refresh of installation dependencies.\nLet’s review the latest updates for our key highlights without further ado.\nWelcome, stateful mocks! Microcks has allowed specifying dynamic mock content using expressions since the early days. Those features help translate an API’s dynamic behavior and provide meaningful simulations.\nBut sometimes, you may need to provide even more realistic behavior, and that’s where stateful mocks may be of interest. Stateful mocks are a game-changer in the pursuit of an even smarter mocking experience. You can now experience enhanced realism in your API simulations and free your creativity!\nHowever, automatically turning mocks into stateful simulations is impossible, as numerous design guidelines need to be considered. At Microcks, we put this power in the user’s hands, providing powerful primitives like scripts, store, requestContext, and template expressions to manage persistence where it makes sense for your simulations. 
This feature is now available at your convenience via the store service that is directly usable from scripts like this:\nstore.put(\"my-key\", \"Any value represented as a String\"); def value = store.get(\"my-key\"); store.delete(\"my-key\"); Check our new Configuring stateful mocks how-to guide, which will take you through a real use case of managing a realistic shopping cart where customers’ items are persisted during the process.\nA new API Examples specification format While Microcks’ motto is not to reinvent the wheel and to reuse standard artifacts (see artifacts reference), we think 1.10.0 may be the right time to introduce our own specification format, fully driven by the goal of importing mock datasets into Microcks.\nAPIExamples can be seen as a lightweight, general-purpose specification that solely serves the need to provide mock datasets. The goal of this specification is to keep the Microcks adoption curve very smooth for development teams and non-developers alike. The files are simple YAML and aim to be very easy to understand and edit.\nMoreover, the description is independent of the API protocol! We’re rather attached to describing examples depending on the API interaction style: request/response based or event-driven/asynchronous.\nAs a sample, you’ll see below the APIExamples snippet for our gRPC mock tutorial, but it would be much the same when dealing with a REST API:\napiVersion: mocks.microcks.io/v1alpha1 kind: APIExamples metadata: name: org.acme.petstore.v1.PetstoreService version: v1 operations: getPets: All Pets: request: body: \"\" response: body: pets: - id: 1 name: Zaza - id: 2 name: Tigress - id: 3 name: Maki - id: 4 name: Toufik searchPets: k pets: request: body: |- { \"name\": \"k\" } response: body: |- { \"pets\": [ { \"id\": 3, \"name\": \"Maki\" }, { \"id\": 4, \"name\": \"Toufik\" } ] } This format is intended to be used as a secondary artifact format. It would be a companion to our existing APIMetadata format but dedicated to API Examples.\nBe sure to read our API Examples Format reference documentation, which details the different properties available and how to use this format for different types of APIs.\nUber and Native images enhancements Introduced in recent Microcks releases, the microcks-uber distribution and its GraalVM native variant are perfectly well adapted for a quick evaluation or for ephemeral usage via libraries like Testcontainers. However, they were still a bit behind the regular distribution in terms of features covered.\nStarting with 1.10.0, we reduced this feature gap a lot by making:\nMQTT and RabbitMQ/AMQP protocols available in the Uber distribution, gRPC features and full templating features work in the Native variant of this Uber distribution. The long-term goal we’re pursuing and are close to achieving is full feature parity between the regular/uber/uber-native distributions—except for some structural ones that would be impossible to port. 
Typically, the Groovy SCRIPT feature will never be available in native mode, as dynamic evaluation is, by definition, antagonistic to static compilation.\nIf you want to learn more about the feature gap reduction and associated changesets, please refer to #1239 for MQTT support, #1240 for RabbitMQ support, #1227 for gRPC testing features support, and #1226 for templating features support.\nDependencies and installation upgrade While considering upgrading to 1.10.0, you should also plan your update carefully depending on your setup. We’ve made significant updates to external container dependencies like MongoDB, Keycloak, and its associated Postgres database.\nThese are noticeable changes you should take care of:\nThe centos/mongodb-36-centos7 image, which had not been maintained for 3 years, has been replaced by library/mongo:4.4.29, which is 3 months old and still updated, The quay.io/keycloak/keycloak:22.0.3 image has reported CVEs and has been replaced by the fresher quay.io/keycloak/keycloak:24.0.4, The centos/postgresql-95-centos7:latest image has not been updated in 5 years and has been replaced by a fresher library/postgres:16.3-alpine, updated 12 days ago. Unfortunately, updating the MongoDB and Postgres engines cannot be done without breaking things. That’s why we recommend not rolling in-place upgrades of existing installations but rather proceeding with care: exporting and backing up your data from MongoDB and Postgres before importing it again into new instances. This can be done with low-level tools (like mongodump and pg_dump) or at an application level (using Microcks snapshots or Keycloak realm exports).\n⚠️ Warning\nBe aware that the dependencies Microcks proposes during installation are provided for convenience purposes only. Our take is that you shouldn’t rely on them for crucial “production” workloads but rather use an external component. You can override the default image OR completely disable the installation of external dependencies in our Helm Chart or Operator.\nIn addition to these upgrades, we also changed the way you can customize the images and external dependency artifacts in our Helm Chart. Where we previously had a single image field for each component (the main one, postman, keycloak, mongo, etc.), we have split these single fields into multiple properties registry, repository, tag or digest, as illustrated below:\nimage: registry: quay.io repository: microcks/microcks tag: nightly digest: This change brings the benefits of:\nBeing aligned with community best practices regarding image customization - other communities like OpenTelemetry, Strimzi or Jaeger follow the same conventions, Allowing easier customization for people using a corporate registry as a cache or wanting to pin the artifact coordinates to an immutable digest. Thanks to Romain Quinio 🙏 from Amadeus IT Group for bringing this enhancement suggestion to the discussion! You can check the original #1211 issue.\nCommunity amplification The Microcks community continues to grow and make waves in the tech world! Here are some of our latest highlights:\n🎤 Talk from Hugo Guerrero at Riviera DEV 2024 We are thrilled to share that Hugo (Red Hat) presented an outstanding talk at Riviera DEV 2024! His session featured a great demo showcasing Microcks’ shift-left approach using Quarkus and Testcontainers. 
Check out his LinkedIn post for more details.\n📝 Microcks Mentioned as an Alternative to WireMock Microcks has been highlighted in Speedscale’s blog post as a top alternative to WireMock. We’re proud to be recognized among the top 5 WireMock alternatives:\nhttps://speedscale.com/blog/wiremock-alternatives/\n🌐 Microcks Joins the CAMARA Project Microcks, a Cloud Native Computing Foundation (CNCF) Sandbox project, is now officially listed as a member of the CAMARA Project, an initiative by The Linux Foundation! 🎉\nhttps://camara.landscape2.io/\n🚀 Member of CNCF App Development Working Group As a CNCF project, Microcks is proud to join the App Development Working Group within the CNCF TAG App Delivery. This initiative aims to bridge the gap between developers and CNCF projects that directly impact daily workflows 🙌\nhttps://www.cncf.io/blog/2024/07/05/a-new-app-development-wg-has-now-been-launched/\n📢 Shoutout to Java Dominicano Community A massive thank you to the Java Dominicano community and a special shoutout to Eudris Cabrera for his outstanding talk and demo on Microcks! 🌟\n🎉 Microcks Hits 2000+ Followers on LinkedIn! We are excited to announce that we have reached over 2000 followers on LinkedIn! Join us to stay updated on the latest news about Microcks. Follow us on LinkedIn.\nStay tuned for more updates, and continue to be a part of our journey as we grow and innovate together!\nWhat’s coming next? As usual, we will eagerly prioritize items according to community feedback. You can check and collaborate via our list of issues on GitHub and the project roadmap.\nMore than ever, we want to involve community members in design discussions and start discussions about significant additions regarding OpenAPI callbacks, webhooks, and AsyncAPI in Microcks. Please join us to shape the future!\nRemember that we are an open community, which means you, too, can jump on board to make Microcks even greater! Come and say hi! on our GitHub discussion or Discord chat 👻, send some love through GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel!\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-a-thriving-year-in-the-cncf-sandbox/","title":"A Thriving year in the CNCF Sandbox and Its Transformative Impacts","description":"A Thriving year in the CNCF Sandbox and Its Transformative Impacts","searchKeyword":"","content":"In the ever-evolving landscape of open-source software, achieving recognition and support from reputable foundations can be a game-changer for projects. This was precisely the case for Microcks, an innovative API mocking and testing project. When Microcks joined the CNCF (Cloud Native Computing Foundation) Sandbox a year ago, new opportunities opened up. In this blog post, we’ll delve into Microcks’ exciting journey as it embraced its CNCF Sandbox status and explore the profound positive impacts the project experienced in its first year within the foundation.\nKubeCon EU 2024 at the Microcks Project Pavilion booth, showcasing core contributors from Red Hat (Hugo Guerrero), Google (Julien Breux) and Microcks core maintainers (Laurent Broudoux & Yacine Kheddache). Microcks’ Integration into the CNCF Sandbox: A Milestone of Success\nMicrocks took a significant step forward by becoming part of the CNCF Sandbox, a space dedicated to nurturing early-stage cloud-native projects. 
The CNCF, well-known for supporting and promoting cloud-native technologies, provided Microcks with a platform that validated its potential and exposed it to a vast community of developers, enterprises, and enthusiasts. This integration was marked by Microcks’ announcement in the blog post titled “Microcks Joining CNCF Sandbox”, a testament to our community’s dedication and ambition.\nThe 1-Year Transformation: Positive Impacts on Microcks 1. Rapid Growth and Adoption\nOne of the most visible impacts of Microcks’ inclusion in the CNCF Sandbox is its exponential growth. This growth is vividly evident in the ADOPTERS.md file on Microcks’ GitHub repository. This file is a directory of organizations and projects that have embraced Microcks as part of their development workflow. Over the first year, the list of adopters expanded significantly, reflecting the rising popularity and utility of the Microcks project.\nPublic adopters have grown from 1 to 16 in just a year, spanning various verticals and geographies! 🌍🚀😊 Private adopters (companies that are not yet listed publicly but are in touch with the community) now include more than 20 organizations… 🚀🤝😊 Moving from private to public adopter is a small contribution that makes a big impact. It greatly helps the project gain momentum and credibility. Your support is truly important for our growth! 🌟🙏\nAs enterprises and developers integrated Microcks into their development lifecycle practices, it became evident that the CNCF affiliation increased the project’s visibility and instilled higher trust among our community users. Being part of the CNCF family inherently brings a sense of credibility and reliability, a factor that likely contributed to Microcks’ rapid adoption.\nKeep discovering by reading adopters’ testimonials:\nCNAM Partners with Microcks for Automated SOAP Service Mocking Extend Microcks with custom libs and code J.B. Hunt: Mock It till You Make It with Microcks 2. Embracing the “Shift Left” Paradigm\nMicrocks’ tenure in the CNCF Sandbox has reinforced the importance of the “shift left” approach to software development. The “shift left” paradigm emphasizes addressing issues as early in the development cycle as possible, reducing the chances of critical problems emerging in later stages. In Microcks’ context, this translates to incorporating API mocking and testing right from the outset of development.\nThe shift-left approach was expertly outlined in the informative article “Mocking and Contract Testing in Your Inner Loop with Microcks”. This piece elaborates on how Microcks’ inner-loop focus empowers developers to mock APIs and run contract tests early in their development cycle. This leads to swifter identification and resolution of issues, allowing developers to iterate more efficiently and ultimately deliver higher-quality software.\nMetrics for Testcontainers Microcks’ module downloads and usage are rapidly increasing, demonstrating the need and enthusiasm of developers: 🌟\nFrom 500 downloads in Q4 2023 🚀 To over 2K downloads in May 2024 🎉 The same goes for our popular Docker Desktop Extension: listed among the top extensions every developer must try and installed over 6.4K times! 
🚀🌟😊\n⏩ It’s very easy and straightforward to get started with Microcks in 3 minutes; watch: Getting Started with Microcks Docker Desktop Extension\nSince day one, the Microcks project has been robust and efficient in managing all kinds of API use cases for your outer loop. Based on adopter and community feedback, we now have a lighter and faster version of Microcks named microcks-uber. By packaging our Microcks Java application into a platform-native binary using GraalVM native image and Spring Boot AOT compilation, we’ve achieved an incredible startup time of just 300 milliseconds, consuming very few resources!\nWe are now the ultimate tool to bridge the gap between development on a laptop and centralized, highly scalable operations on Kubernetes. 🚀💻🌐 See our article: “How Microcks fit and unify Inner and Outer Loops for cloud-native Development” 💡\n3. Better Together: Ecosystem Orientation\nThe CNCF Sandbox status brings a sense of camaraderie and the opportunity to be part of a broader ecosystem. This “better together” mindset encourages projects to explore integrations and collaborations that enrich the developer experience.\nMicrocks’ alignment with the CNCF ecosystem is a testament to this ethos. The project isn’t just about standalone functionality; it’s about fitting into the larger cloud-native landscape. Microcks demonstrates its commitment to creating a seamless developer experience within the CNCF ecosystem by promoting compatibility with other cloud-native technologies.\nExplore our ecosystem partnerships by discovering key articles and videos highlighting our collaborations with OpenTelemetry and Grafana Labs, Testcontainers, Docker, Backstage, Canonical, Red Hat, GitLab, Solo.io and more if you follow us ;-)\nMicrocks, by the numbers and key metrics Microcks maintainers care about metrics and are committed to continuous improvement. We’re excited to share how we’ve leveled up our contributions and community engagement through CNCF metrics. Our diversity of contributors, contribution growth, and GitHub activity have all shown remarkable progress. Dive into the details and explore our journey using CNCF DevStats, Linux Foundation Insights, and some great help from our friends at Bitergia, where you can double-check the data or dig deeper into our success story. Let’s celebrate our growth and the power of open-source collaboration! 🎉\nWorldwide diversity of contributors, with notable contribution growth from APAC (mainly China) over the last 12 months: 🌍\nMore and more contributions are coming from organizations that rely on Microcks and have decided to participate in project maintenance and evolution to secure and invest in their supply chain. 😊\nNet Newly Attracted Affiliations: 25 new organizations with developers actively participating in the community! 🚀\nGit Overview 🚀 910 commits this year vs. 486 commits last year 📈\nAttracted New Developers Continue to Rise! 🌟 Growth has skyrocketed since we joined the CNCF! 😊\nConclusion Microcks’ journey within the CNCF Sandbox has been nothing short of transformative. From achieving remarkable growth and adoption to embracing the shift-left approach and contributing to a better-together ecosystem, the project’s trajectory has been infused with energy, innovation, and a collaborative spirit. 
This one-year period highlighted the inherent advantages of being part of a foundation that champions cloud-native technologies and encourages projects to reach new heights.\nAs the open-source community continues to evolve, success stories like Microcks remind us of the immense value that foundation affiliations can bring. By fostering an environment of support, collaboration, and growth, foundations like the CNCF provide projects with the tools they need to make a lasting impact on the world of software development. With its remarkable journey thus far, Microcks is a shining example of how a project’s affiliation with the proper foundation can propel it to new heights of success and innovation.\nStay tuned as Microcks gears up to elevate within the CNCF! We’re excited to announce our plans to submit and launch the qualification process to become an incubating project in the foundation during our second year! 👀\n"},{"section":"Blog","url":"https://microcks.io/blog/cnam-soap-service-mocking/","title":"CNAM Partners with Microcks for Automated SOAP Service Mocking","description":"CNAM Partners with Microcks for Automated SOAP Service Mocking","searchKeyword":"","content":"With over 2,500 employees, the Caisse Nationale de l’Assurance Maladie (CNAM) is the operational “headquarters” of France’s compulsory health insurance system. We play a pivotal role in ensuring access to healthcare for all French citizens, overseeing and funding health insurance coverage for employees and their families.\nAdditionally, we coordinate with and assist the local organizations within our network, which consists of 164 entities deployed nationally, regionally, and locally throughout France. We rely on SOAP (Simple Object Access Protocol) for our historical and mission-critical legacy systems to facilitate seamless information exchange among these organizations.\nSource (🇫🇷): https://www.assurance-maladie.ameli.fr/qui-sommes-nous/organisation/reseau-proximite At CNAM, we manage hundreds of services that process a significant data flow daily, with each potentially relying on others. In our development and testing phases, we depend on thousands of simulations representing different versions of each service. Throughout all project phases, multiple individuals or groups utilize these simulations, contributing to our extensive use of datasets. This poses challenges in maintaining and accelerating testing, validation, interoperability, and conformance at scale.\nAPIs and web services at CNAM With a diverse ecosystem of healthcare organizations and systems, interoperability is critical to ensuring that data can be exchanged and understood across different platforms. APIs provide standardized interfaces that allow disparate systems to communicate and share data effectively and securely, regardless of the underlying technology stack.\nBy exposing functionalities through APIs, CNAM can automate various processes and workflows, increasing efficiency and reducing manual effort. This automation streamlines administrative tasks, reduces errors, and frees up resources to focus on more critical aspects of healthcare delivery.\nCNAM relies on SOAP for its legacy systems, which are mission-critical for its operations. 
APIs and web services enable the integration of these legacy systems with modern applications and technologies, ensuring that CNAM can leverage its existing infrastructure while embracing innovation.\nAs CNAM’s network continues to evolve and grow, APIs provide the scalability and flexibility to adapt to changing requirements and accommodate new technologies and services. This agility allows CNAM to respond quickly to emerging healthcare challenges and opportunities.\nBenefits of the Solution CNAM previously used a homemade mocking solution, which was statically built and required consumers to provide business examples and behavior. This approach consumed a lot of infrastructure resources and generated drift, disparities, and non-reusable assets between organizations within the ecosystem.\nDuring our research for an API mocking and testing solution, we discovered Microcks and immediately embraced its open-source, community-driven, and very innovative approach to managing all kinds of APIs using the same facilities, including SOAP, which is mandatory for our systems.\nWe also realized the power and advantages of moving from a consumer-driven to a provider-driven approach. We recognized many of our pains of being 100% consumer-driven for all our business datasets and examples in the article from Laurent Broudoux. It has completely changed our mindset and the way we now use mocking and sandboxes in our development lifecycle.\nAt CNAM, we have chosen Microcks to accelerate and automate the simulation (mocking) of our 450 SOAP services and more than 100 Oracle Tuxedo processing and transaction systems exposed via SOAP. Our usage of Microcks replaces the existing internal solution and offers several benefits tailored to our needs.\nAccelerated and Automated Simulation: Microcks accelerates and automates the simulation (mocking) of CNAM’s extensive suite of services and data processing systems. This streamlines our internal processes and reduces manual effort, leading to faster development cycles.\nReuse of Existing Datasets: By leveraging Microcks, we can reuse existing datasets, eliminating the need to recreate mocks for each service. This not only saves time but also ensures consistency across different testing scenarios.\nFully Automated Sandboxes as a Service: Microcks empowers us to provide fully automated sandboxes to all consumers, accelerating development and testing workflows.\nSelf-Service Mock Generation: With Microcks, we enable self-service mock generation for API consumers, empowering developers to iterate quickly and test their applications effectively.\nCNAM Mock admin web application workflow. Using the Mock Admin application based on Microcks has significantly streamlined our testing processes. Its intuitive interface and flexibility allowed us to create customized mocks to simulate complex and mission-critical scenarios, crucial for our automated tests with the INS and SNGI databases. Microcks has demonstrated exceptional reliability and performance, enhancing the quality of our tests and allowing us to detect anomalies early in our development and validation process. Laurent Fontaine, Application Owner at CNAM\nThe datasets imported into Microcks are formatted as CSV files containing various information such as letters, words, phrases, numbers, tables, or regular expressions (regexes).\nCSV-formatted CNAM dataset and Microcks dynamic custom dispatch. 
This structure demonstrates that the first column acts as a discriminator value, the last column specifies the response name, and the remaining columns inject mock data into the response context.\nOverall, Microcks’ self-service and on-demand capabilities enable us to speed up complex development and validation processes, ensuring efficient and reliable healthcare services for all stakeholders (including non-technical associates using XLS and CSV files to provide business examples). Additionally, it reduces infrastructure size and consumption, aligning with our sustainability objectives.\nFinally, we’re extensively leveraging Microcks’ extensibility and custom libraries in the API dispatching process. This has been detailed in a technical blog post titled:\n“Extend Microcks with custom libs and code”.\nBy leveraging existing datasets and libraries, we seamlessly integrate business and functional behavior validation into Microcks mocks, enabling dynamic generation. This not only enhances efficiency and accuracy but also significantly reduces time to market and improves delivery timelines.\nNext objective: test automation Initially, we primarily used Microcks for mocking services, but we are now working to expand its usage to include comprehensive testing capabilities. This transition will significantly advance our testing strategy, allowing us to achieve greater efficiency, reliability, and agility in the CNAM software development lifecycle.\nWith Microcks’ support for non-regression tests and validation, we can ensure that any changes or updates to our APIs do not introduce regressions or break existing functionality. By automating these tests within our existing CI/CD pipeline, we will be able to identify and address issues early in the development process, minimizing the risk of introducing bugs or defects into production environments.\nAutomated testing reduces manual quality assurance effort and enables faster feedback loops, allowing developers to iterate more quickly and confidently to improve overall development velocity.\nCNAM Mock admin web application to Microcks’ automated tests pipeline. This step is essential for our progress, paving the way for additional opportunities to enrich our development lifecycle with Microcks now that the solution is in production.\nContributing to Open-Source At CNAM, we are invested in contributing upstream to the open-source Microcks community for several compelling reasons.\nFirstly, in alignment with the French and European governments’ open-source directive and its emphasis on digital sovereignty, we recognize the strategic importance of investing in and actively participating in open-source projects. By contributing to Microcks, CNAM not only strengthens its own digital sovereignty but also contributes to the broader ecosystem of open-source solutions, which are essential for the success of our mission and objectives.\nBy actively participating in Microcks’ development and enhancement, we can contribute to the project’s direction, tailor its features and functionalities to better suit CNAM’s needs, and guarantee the longevity and sustainability of our supply chain. 
This ensures that Microcks remains a dependable and effective tool in our software development processes.\nMoreover, by participating in an open-source project like Microcks, CNAM employees can enhance their skills, collaborate with a diverse community of developers, and contribute to advancing technology in their field. This can lead to increased job satisfaction, professional growth, and a sense of pride in contributing to a project that positively impacts both CNAM and the broader software development community.\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.9.1-release/","title":"Microcks 1.9.1 release 🚀","description":"Microcks 1.9.1 release 🚀","searchKeyword":"","content":"Just two months after the previous release, we are thrilled to announce this brand-new Microcks version! Please welcome the 1.9.1 release of Microcks, the open-source cloud-native tool for API Mocking and Testing 🙌\nWith no fewer than 30 newly resolved issues, this release is special as it is the first to have so many issues directly contributed by community users: 25 of the 30 evolutions come directly from them! Kudos to our community 👏, and see greetings along the notes below.\nNew Proxy features Aside from the numerous enhancements we’ll discuss just after, this release’s main new feature is the addition of proxy behavior, which Nikolay Afanasyev introduced in this blog post.\nAs stated in this post, 1.9.1 comes with two new dispatchers called PROXY and PROXY_FALLBACK. While PROXY is a simple passthrough to an external backend service, PROXY_FALLBACK handles a bit more logic and calls the external backend service only if Microcks doesn’t find a matching mock response first.\nOne great thing is that this new proxy logic has been implemented consistently for REST, SOAP and GraphQL APIs in Microcks. Check the full blog post for more details. Thanks again, Nikolay! 🙏\nNoticeable enhancements Below is a list of noticeable enhancements, and shoutouts to the people who contributed them, in no particular order.\nTemplating of response headers This enhancement lets you specify response header values using the Microcks-specific {{ }} template notation. The {{ }} notation is a placeholder that can be replaced with dynamic values. You can use it to return random values (think of a UUID as a transaction or correlation identifier) or request-based values. It was, for example, used to implement OpenID Connect mocks.\nheaders: 'Location': schema: type: string examples: generic: value: \"{{ request.params[redirect_uri] }}?state={{ request.params[state] }}&code={{ uuid() }}\" Check #1097 for more details on this, and thanks again to Nikolay 🙏 for the initial discussion and Pull Request.\nJSON pointers extended usage This enhancement allows you to reference arrays or array elements in mock responses - still using the {{ }} notation placeholder. Arrays or their elements will be directly serialized as JSON and integrated as such into the response body. 
For example, you can use this template:\n{ \"allBooks\": {{ request.body/books }}, \"firstBook\": {{ request.body/0 }} } to get results like:\n{ \"allBooks\": [{ \"title\":\"Title 1\", \"author\":\"Jane Doe\" },{ \"title\":\"Title 2\", \"author\":\"John Doe\" }], \"firstBook\": { \"title\":\"Title 1\", \"author\":\"Jane Doe\" } } Check #1139 for more details on this, and thanks to Andreas Zöllner 🙏 for proposing and writing this enhancement.\nObject query parameters support This enhancement adds support for serializing an object’s properties as request parameters. It follows the serialization rules for style: form parameters with explode: true, which translates in OpenAPI to having a query parameter of object type.\nSo typically, you may define a GET /users?name=Alex&age=44 endpoint where the query parameter is a User object. How cool! 😎\nCheck #1143 for more details, and thanks to Samuel Antoine 🙏 for proposing and writing this enhancement.\nWebapp enhancements and linting While functional, the Microcks web app undoubtedly needs more love and enhancements, as it is not the original maintainers’ field of expertise 😉 Thanks to community contributions, we’re now in better shape and have people still seeking improvements.\nWe have some noticeable improvements here:\nSelecting/deselecting operations when launching a test can now be done with a single checkbox, Broken unit tests have been removed from the codebase, Broken links to documentation have been fixed, Linting of the application and refactoring for better standards compliance have been applied, Analyses of how we could move to fresher dependencies or frameworks are coming. Check #1153, #1163, #1166 and #1171 contributions by Siarhei Saroka 🙏 for more details.\nMore Tests Contributing to an open source project is not only a matter of writing code. Starting with documentation and adding new tests is a great way to get hands-on experience. As Microcks becomes a critical tool for many organizations, increasing the coverage of our test suite is essential.\nWe moved from 36.8% coverage on January 1st to 48.4% as of today! This is great progress for a code base of nearly 12K lines of code, and we can still get even better!\nCheck #1128, #1130 and #1150 for awesome contributions on tests by Matheus Cruz 🙏\nDocumentation refactoring effort We also want to take the opportunity of this release notes post to announce a significant refactoring effort on documentation. As stated above, Microcks has become critical and attracts more and more newcomers. The documentation needs to be reorganized to better assist with onboarding and help users find what they’re looking for.\nWe came up with a new approach to the documentation structure, exposed in this GitHub thread. Our main goal is to clarify the categorization of information between Tutorials, Guides, Explanations, and Reference materials. We want to make it easier and faster for newcomers to find valuable information depending on where they are in their learning process. We also want to make it easier for community users to contribute new content.\nFor all of that, we need your help! 
So, if you are or would love to be a Tech Writer and want to contribute to this cool open source project, please join us and share your recipes and experiences to improve our documentation!\nWhat’s coming next? As usual, we will eagerly prioritize items according to community feedback. You can check and collaborate via our list of issues on GitHub and the project roadmap.\nRemember that we are an open community, which means you, too, can jump on board to make Microcks even greater! Come and say hi! on our GitHub discussion or Discord chat 👻, send some love through GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel!\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/new-proxy-features-1.9.1/","title":"Introducing new Proxy features in Microcks 1.9.1","description":"Introducing new Proxy features in Microcks 1.9.1","searchKeyword":"","content":"We all know about the many benefits of Microcks, which make it an excellent tool for the software development process. When I first started using it, everything went smoothly. However, there is no limit to QA’s expectations 😊\nRecently, a quality assurance specialist approached me with a request: “The mocks work fine, but we need to implement a new testing scenario. Every third or random response should be a mock-up, while the rest should come from the actual service…”\nTo achieve this, we could use an external load balancer, for example. Additionally, we would need a new configuration point for the balancer, its behavior, and so on… This is not the Samurai way 😊\nLuckily, Microcks is an open-source project with an active community. After discussing the issue with Laurent, we created two dispatchers that provide simple and advanced proxy logic for the REST, SOAP, and GraphQL protocols.\nThose two new dispatchers - to be released in the 1.9.1 version of Microcks - are called PROXY and PROXY_FALLBACK. While PROXY acts exactly as its name suggests, you’ll see that PROXY_FALLBACK handles a bit more logic and enables advanced use cases. Let’s dive into the explanations!\nThe new PROXY behaviors The simple PROXY dispatcher simply changes the base URL of the request and makes a call to the real backend service.\nEnabling the PROXY dispatcher is a per-operation setting. That means that within the same API, you may have some operations that use regular mocks and others that just delegate API calls to a real backend system. Your client still calls the Microcks endpoints, though - allowing a smooth transition from mocked, not-yet-implemented operations to ready ones on a real implementation.\nThe advanced PROXY_FALLBACK dispatcher works similarly to the FALLBACK dispatcher, but with one key difference: when no matching response is found within the Microcks dataset, instead of returning a fallback response, it changes the base URL of the request and makes a call to the real service.\nHere again, enabling this proxy mechanism is a per-operation setting that allows you to mix different behaviors in the same Microcks API endpoints. Hence, you can use regular mocks, proxy a request only when nothing is found on the Microcks side, or always proxy to the real backend.\nHow do we enable them? Dispatcher configuration in Microcks is a per-operation setting, so enabling PROXY or PROXY_FALLBACK must be done at the operation level. 
There are many ways to override the inferred default dispatching mechanism in Microcks: you can use the UI, use the API, use a Metadata artifact, or use API specification extensions like we do just below.\nPROXY To enable the PROXY dispatching, we only need to specify PROXY as the dispatcher and the base URL of the actual service as the dispatcherRules:\nx-microcks-operation: dispatcher: PROXY dispatcherRules: http://external.net/myService/v1 PROXY_FALLBACK The configuration of the PROXY_FALLBACK dispatcher is similar to that of the FALLBACK dispatcher. However, instead of the fallback key, whose value refers to the response name, we have the proxyUrl key with the base URL of the actual service as its value:\nx-microcks-operation: dispatcher: PROXY_FALLBACK dispatcherRules: | { \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARTS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;proxyUrl\u0026#34;: \u0026#34;http://external.net/myService/v1/\u0026#34; } A myriad of opportunities! So, now we have two more ways to customize Microcks\u0026rsquo; behavior. Why would we need this?\nWe can now set up Microcks\u0026rsquo; URL for our client application and switch between mocking and calling the actual backend service by turning on/off the proxy dispatcher in the Microcks UI without any changes to the client application configuration.\nAnd if we need some kind of automated, exotic logic, we can use the power of the SCRIPT dispatcher, which the PROXY_FALLBACK dispatcher wraps. This is the approach I took to implement the behavior that the QA engineer requested.\nx-microcks-operation: dispatcher: PROXY_FALLBACK dispatcherRules: | { \u0026#34;dispatcher\u0026#34;: \u0026#34;SCRIPT\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;if (new Random().nextInt(3) == 0) { return \u0026#39;mock\u0026#39;; }\u0026#34;, \u0026#34;proxyUrl\u0026#34;: \u0026#34;http://external.net/myService/v1/\u0026#34; } I really hope this new feature will be helpful. Based on my own experience, I would say that the contribution process is easy and comfortable. The journey from an idea to the finished feature has been a pleasant experience for me. So if you have any ideas on making Microcks more powerful, please feel free to share your thoughts with the community!\n"},{"section":"Blog","url":"https://microcks.io/blog/mocking-oidc-redirect/","title":"Mocking OpenID Connect redirect","description":"Mocking OpenID Connect redirect","searchKeyword":"","content":"A few days ago, I worked on a new prototype to see what it means to use Microcks to mock OpenID Connect authentication flows. As the Zero-trust security model is now the norm in this cloud and distributed computing era, developers must integrate this from the beginning of their application development. However, accessing an Identity Provider (IDP) is not always convenient depending on your working situation - think of remote access, disconnected places, stateful provider inconsistencies, etc. Hence, there is an opportunity for light mocks that work locally, at the network level!\nWhile describing those mocks, I thought about the redirection part of the authentication flows. You may know: the typical situation where the client provides some state and a redirect URL to the IDP and where the IDP should send an HTTP Redirect to the location, with state and a new token or authorization code. Describing this may be quite complex as it typically involves behavior transcription. 
However, I found what I think is a nice and elegant solution to describe and mock it thanks to Microcks advanced features 😉\nSo, to tackle this problem of describing an OIDC redirection, we have to use three advanced features of Microcks. The first one has already been released and is available; the two others will be released in 1.9.1, coming mid-May. For reference, and if you want to learn more about them, we’ve used:\nNo content response support: #944 Response headers templating: #1097 Response with headers only: #1142 Start with OpenAPI To start my prototype, I have initialized an OpenAPI specification based on the GitHub Authorization flow for OAuth apps. It describes the different query parameters and the response with the 302 HTTP response code. You’ll see in the snippet below that each parameter contains an example named generic and that my specification also contains specific x-microcks attributes and notations like {{}} expressions. We’ll dive into their explanations just after!\npaths: /login/oauth/authorize: get: parameters: - name: response_type in: query description: Expected response type schema: type: string examples: generic: value: code - name: client_id in: query description: The client identifier for the OAuth 2.0 client that the token was issued to. schema: type: string examples: generic: value: GHCLIENT - name: scope in: query description: String containing a plus-separated list of scope values schema: type: string examples: generic: value: openid+user:email - name: state in: query description: Client state that should appear in redirect directive schema: type: string examples: generic: value: e956e017-5e13-4c9d-b83b-6dd6337a6a86 - name: redirect_uri in: query description: Redirect to this URI after successful authorization schema: type: string format: uri examples: generic: value: http://localhost:8080/Login/githubLoginSuccess x-microcks-operation: dispatcher: FALLBACK dispatcherRules: |- { \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARAMS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;response_type\u0026#34;, \u0026#34;fallback\u0026#34;: \u0026#34;generic\u0026#34; } responses: \u0026#39;302\u0026#39;: description: Redirect x-microcks-refs: - generic headers: \u0026#39;Location\u0026#39;: schema: type: string examples: generic: value: \u0026#34;{{ request.params[redirect_uri] }}?state={{ request.params[state] }}\u0026amp;code={{ uuid() }}\u0026#34; 1️⃣ Let’s begin with explanations on the x-microcks-refs attribute on the 302 response.\nMicrocks mocks are based on request/response pairs collected within API artifacts. However, in this situation where the 302 response has no content, we cannot directly associate a response example with request elements. The x-microcks-refs attribute covers this situation and explicitly tells Microcks that elements matching the generic request will be associated with the 302 response. We have a request/response pair!\n2️⃣ Now, let’s check the x-microcks-operation attribute within the operation definition.\nMicrocks mocks use dispatchers and dispatching rules to identify a request/response pair - and so the response to return - by analyzing incoming request elements. When not specified, Microcks infers the dispatcher to use by checking all the request elements. In our case, we don’t want that as we always want to return the 302 response we named generic. So for that, we define a custom dispatcher as a FALLBACK that will, when failing, return the generic response. 
The root dispatcher URI_PARAMS will never match here, and we will always have a 302 response.\n3️⃣ Finally, let’s explore the templating features using the {{}} notations for the Location header on the last line.\nMicrocks can use templates for mock response content and now for header values!\nrequest.params[redirect_uri] will be evaluated and replaced by the value of the redirect_uri query parameter of the incoming request. This allows navigating to the target location, request.params[state] will be evaluated and replaced by the value of the state query parameter of the incoming request. This allows you to transfer state back to the target location, uuid() will be evaluated as a function and replaced by the value of a new Universally Unique IDentifier. Easy, no? 😜\nTest our mock Loading this OpenAPI specification file into Microcks will give you a local endpoint ready to receive requests and use our mock response. My base URL is http://localhost:8080/rest/GitHub+OIDC/1.1.4 as I mimic this OIDC flow with a GitHub OIDC API. Let’s try it out with a curl command:\n$ curl -X GET \u0026#39;http://localhost:8080/rest/GitHub+OIDC/1.1.4/login/oauth/authorize?response_type=code\u0026amp;client_id=GHCLIENT\u0026amp;scope=openid+user:email\u0026amp;redirect_uri=http://localhost:8080/Login/githubLoginSuccess\u0026amp;state=e956e017-5e13-4c9d-b83b-6dd6337a6a86\u0026#39; -v ==== OUTPUT ==== [...] \u0026gt; \u0026lt; HTTP/1.1 302 \u0026lt; Access-Control-Allow-Origin: * \u0026lt; Access-Control-Allow-Methods: POST, PUT, GET, OPTIONS, DELETE \u0026lt; Access-Control-Max-Age: 3600 \u0026lt; Access-Control-Allow-Headers: host, user-agent, accept \u0026lt; Location: http://localhost:8080/Login/githubLoginSuccess?state=e956e017-5e13-4c9d-b83b-6dd6337a6a86\u0026amp;code=5bd0c5f6-bf26-4892-a10a-a4cbcb0cc17f \u0026lt; X-Content-Type-Options: nosniff \u0026lt; X-XSS-Protection: 0 \u0026lt; Cache-Control: no-cache, no-store, max-age=0, must-revalidate \u0026lt; Pragma: no-cache \u0026lt; Expires: 0 \u0026lt; Content-Length: 0 \u0026lt; Date: Mon, 22 Apr 2024 09:58:00 GMT \u0026lt; * Connection #0 to host localhost left intact Yep! That works well! And each and every time I send new requests, I’ll receive a new code in the Location header of my 302 response! 🎉\nThinking about it 💭 Technical prowess is always admirable, but what I like the most about the final result of this enhanced OpenAPI spec is that it brilliantly illustrates the power of examples! Not only do examples allow the API consumer to grasp the real, contextual meaning of information, but they also allow understanding part of the API behavior when expressed using Microcks features! In this OIDC case, it’s clear that the redirection is not just textual documentation lying somewhere on a website. It’s something that becomes part of the API specification, which may avoid inconsistency and speed up the onboarding of new consumers discovering the API. And BTW, it will allow these consumers to quickly test it out in real life using Microcks mocks!\nWhat if you don’t like putting specific extensions in your OpenAPI specification or mixing concerns? We’ve got you covered! Because Microcks supports a multi-artifact definition of mocks, you can actually split those specific notations into another OpenAPI file, which will be treated as an overlay. 
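To make the overlay idea more concrete, here is a minimal sketch of what such a secondary artifact could look like. This is a hypothetical file written for illustration (not the actual file from the sample repository): it re-declares only the operation with the Microcks-specific dispatching extension, so the primary OpenAPI specification stays free of any vendor notation:
openapi: 3.0.3
info:
  title: GitHub OIDC
  version: 1.1.4
paths:
  /login/oauth/authorize:
    get:
      x-microcks-operation:
        dispatcher: FALLBACK
        dispatcherRules: |-
          { \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARAMS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;response_type\u0026#34;, \u0026#34;fallback\u0026#34;: \u0026#34;generic\u0026#34; }
Assuming the info block carries the same API name and version as the primary specification, Microcks can match both artifacts and merge this dispatching setup into the mocks at import time. 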
That’s what we’ve done with this OIDC sample, available on this GitHub repository.\nThanks for reading and do not hesitate to reach out if you want to help push this OIDC prototype further.\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.9.0-release/","title":"Microcks 1.9.0 release 🚀","description":"Microcks 1.9.0 release 🚀","searchKeyword":"","content":"This has been a busy week just before KubeCon EU, but we are delighted to announce the 1.9.0 release of Microcks, the CNCF\u0026rsquo;s open-source cloud-native tool for API Mocking and Testing.\nWe received help from 6 different external code committers and dozens of others who opened and reviewed issues and contributed ideas or blog posts. Most of them are adopters! Kudos to all of them 👏 and see greetings along the notes below.\nThe theme of this release is Time, as illustrated by its highlights:\nAsynchronous and parallel Time through the support of the new AsyncAPI v3 specification, Observed and measured Time with a lot of work done around the addition of OpenTelemetry support and a benchmarking suite for Microcks, Reduced startup Time with a new variant of Microcks that uses GraalVM native and Spring AOT compilations under the hood to give you an unprecedentedly fast bootstrap ⚡ Let’s do a review of what’s new on each one of our highlights without delay.\nParallel time with AsyncAPI v3 support Just three months after the official announcement of this significant spec update, Microcks is the first solution to support v3 for mocking and testing Event-Driven Architectures! Moreover, the eight protocols supported for AsyncAPI v2 are also directly available for AsyncAPI v3 in Microcks, ensuring a smooth transition for your team!\nWe also took advantage of our recent work on OpenAPI complex structures to integrate many enhancements in our AsyncAPI v3 importer! Consequently, Microcks can now follow spec fragments or document references (using the $ref keyword) everywhere! It could be in a single or multiple files; referenced using absolute or relative URLs!\nMicrocks can now also fully support the parametrized channel addresses of AsyncAPI v3. This feature enables you to define how destinations on brokers can be dynamically referenced using message payload elements like the one below:\nchannels: lightingMeasured: address: smartylighting.streetlights.1.0.event.{streetlightId}.lighting.measured [...] components: parameters: streetlightId: description: The ID of the streetlight. location: $message.payload#/streetlightId In this situation, Microcks will dynamically create and manage corresponding destinations on your broker, depending on your mock messages. Imagine a message with a streetlightId value of 01234: it will create a smartylighting.streetlights.1.0.event.01234.lighting.measured Kafka topic or SQS queue; and for another message with a streetlightId value of 56789, it will create another smartylighting.streetlights.1.0.event.56789.lighting.measured Kafka topic or SQS queue, for example.\nFinally, remember that our AsyncAPI v3 capabilities in Microcks support JSON or Avro schema, with integration with Schema Registry - and can also be instrumented by our AI Copilot 🤖 to help you quickly generate rich mock datasets!\nBe sure to check our updated AsyncAPI mocking and testing documentation.\nObservability with OpenTelemetry and all As part of this new 1.9.0, we are also excited to unveil extended monitoring and observability features in Microcks. 
Adding those features was critical as more and more organizations rely on Microcks, for at least two reasons:\nIt is used in performance testing scenarios, and people have to be sure Microcks will not be the bottleneck, It became a frequently updated centerpiece, and people have to ensure new releases do not bring regressions. As part of the CNCF ecosystem, it was a natural decision that the way to go was to provide a comprehensive integration with the OpenTelemetry initiative. OpenTelemetry is a collection of APIs, SDKs, and tools that provide an open, vendor-agnostic way to instrument, generate, collect, transform, and export telemetry data.\nHowever, instrumenting and plugging Microcks into an OpenTelemetry Collector was not enough… We wanted to provide assistance in the process of visualizing, analyzing, and exploring the collected data. As a consequence, we now offer a comprehensive Grafana dashboard. That way, you get a direct digest of all the collected information with instant access to performance metrics per mock endpoint, including TPS and response time percentile information as illustrated below:\nFinally, as generating load on Microcks can be complex for new users, we added a benchmarking suite to Microcks 1.9.0! Easy to get started with for beginners, this suite allows you to simulate Virtual Users on different usage scenarios and gather raw performance metrics of your instance. It can also be used directly, even if you don’t have or use the OpenTelemetry or Grafana services.\nUsing this benchmarking suite, we got an impressive 756.5 hits/second with a p(90) response time of 28.2ms during the bench on a MacBook M2 with a 400MB heap! 🚀\nCheck out this blog post on Observability for Microcks at scale with a comprehensive walkthrough on the different new features. Thanks to Alain Pham 🙏 from Grafana Labs for this excellent contribution!\nReduced bootstrap time with GraalVM The last highlight of this 1.9.0 release is about reducing the bootstrap time of a Microcks instance. As our Testcontainers module is getting traction (more than 2K downloads per month) for integrating API mocking and testing into your local development workflow, we wanted to further enhance the developer experience. Sure, we made some improvements with the microcks-uber container image, allowing you to start a Microcks instance in 2-3 seconds, but we thought we could do better…\nEnter the new docker run -p 8585:8080 -it quay.io/microcks/microcks-uber:1.9.0-native command:\nAnd yes! ⚡🚀⚡ See now this 0.300-second startup time!\nWhat have we done? We “just” packaged our Microcks Java application as a platform-native binary thanks to GraalVM native and Spring Boot AOT compilation.\nThis gives you a complete, platform-specific executable that removes some of the JVM drawbacks (but also some of its benefits) and is now ideally suited for fast, frequent, and ephemeral runs of Microcks. Aside from the effects on the startup time of the application, the new native image brings the following benefits:\nA reduced image size: 109MB instead of 220MB (yes, more than 50%) A reduced surface for security attacks: a static binary prevents the dynamic injection and execution of code in Java. This new variant of Microcks (named microcks-uber native) is perfectly well-adapted for usage through testing libraries like Testcontainers. However, at the time of writing, we don’t recommend using it as a replacement for the standard distribution for long-running instances. 
Some arguments for that: JVM-based applications still tend to have better throughput in the long run, some dynamic features like the SCRIPT dispatcher are not available in this native version, and it is still very fresh.\nCommunity amplification Community contributions are essential to us and do not come only from feature requests, bug issues, and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in a demonstration, inviting us to events, or even talking about Microcks!\nWe’d like to thank the following awesome people:\nJosh Long 🙏 for this fantastic Coffee + Software Livestream reloaded that we’ve recorded together to demo our Testcontainers support and Spring AOT features, Apoorva64 🙏 for his numerous contributions like fixes on CORS support or documentation rendering issues with multi-OpenAPI files. We know that many others are coming 😉 Leon Nunes 🙏 from Solo.io for talking about Mocking GraphQL with Microcks at the GraphQL Bangkok event, Tsiry Sandratraina 🙏 from FluentCI for his Dagger Microcks module allowing you to integrate Microcks into your Dagger pipelines, And our own Hugo Guerrero 🙏 for telling the Microcks story of joining the CNCF at the KCD México event. What’s coming next? As usual, we will eagerly prioritize items according to community feedback. You can check and collaborate via our list of issues on GitHub and the project roadmap.\nMore than ever, we want to involve community members in design discussions and start some discussion about significant additions regarding OpenAPI callbacks, webhooks and AsyncAPI in Microcks. Please join us to shape the future!\nRemember that we are an open community, which means you, too, can jump on board to make Microcks even greater! Come and say hi! on our GitHub discussion or Discord chat 👻, send some love through GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel!\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/observability-for-microcks-at-scale/","title":"Observability for Microcks at scale","description":"Observability for Microcks at scale","searchKeyword":"","content":"As part of the upcoming 1.9.0 release of Microcks, I’m super proud to have contributed new features related to its observability and performance monitoring! As supporting the Open Source ecosystem is part of my day job at Grafana Labs, I was really excited by this collaboration with the Microcks project to put into practice the use of OpenTelemetry, a project that is also part of the CNCF.\nWhy does it matter? Microcks can be used and deployed in many topologies: from ephemeral instances with few APIs \u0026amp; services to always-up-and-running instances serving complex ecosystems of APIs in large organizations. Within this wide range of use cases, Microcks can also be used in:\nShort-lived instances such as on-demand sandboxes, Quality Assurance environments, Performance testing environments. For deployments at scale, the project received the usual questions from the community:\nHow much can a single Microcks instance handle? Can I use it for very large performance testing campaigns? How does it scale? The maintainers needed to be able to provide the proper insights with the right tooling to answer these questions. The first step would be to accurately measure the performance of Microcks to get a grasp of what a single instance could really deliver. 
In addition, some large organizations running Microcks and having it as a mainstream solution started to be concerned about the upgrades. Hence, those legit questions:\nIs the new 1.8.0 release lighter and better than the previous one? Should I upgrade my MongoDB engine for better performance? Will it bring some performance degradation? Those questions fall into the realm of continuous improvement. Therefore, the second requirement this contribution covers is understanding where errors or performance degradation could come from to facilitate code optimizations.\nMicrocks already provided Prometheus endpoints for metrics, but to get deeper insights, it is necessary to also collect logs and traces. Furthermore, there needed to be a way to generate load in order to help with the capacity provisioning of Microcks instances.\nWhat’s in the box? As part of the CNCF ecosystem, it was a natural decision with the maintainers that the way to go was to provide a comprehensive integration with the OpenTelemetry initiative. OpenTelemetry is a collection of APIs, SDKs, and tools that provide an open, vendor-agnostic way to instrument, generate, collect, transform, and export telemetry data.\nIn addition to the Prometheus endpoints still present, Microcks 1.9.0 can now be deployed with OpenTelemetry instrumentation. With that configuration enabled, the metrics, logs, and distributed traces produced by Microcks can be sent via OTLP (the OpenTelemetry Protocol) to any OpenTelemetry Collector service. Enabling this configuration is very straightforward: you just have to set two environment variables during Microcks’ deployment:\nOTEL_JAVAAGENT_ENABLED must be set to true; this activates the OpenTelemetry instrumentation with the OpenTelemetry Java Agent. OTEL_EXPORTER_OTLP_ENDPOINT must be set to a collector URL like http://otel-collector.acme.com:4317. By default, it uses the OTLP/gRPC protocol. You can check the project’s OpenTelemetry documentation for more information.\nAside from the telemetry data collection, with this contribution, Microcks also provides a comprehensive Grafana dashboard. That way, you get a direct digest of all the collected information with instant access to performance metrics per mock endpoint, including TPS and response time percentile information. The backends used here to store the telemetry data on which the Grafana Dashboard is built are Prometheus for the metrics, Loki for the logs, and Tempo for the traces. This enables seamless correlation of all 3 telemetry signals to analyze performance trends, discover potential issues, and identify bottlenecks.\nYou can check the project’s Grafana documentation for more information.\nFinally, as generating load on Microcks can be complex for new users, we also added a benchmarking suite to Microcks 1.9.0! Easy to get started with for beginners, this suite allows you to simulate Virtual Users on different usage scenarios and gather raw performance metrics of your instance. Thanks to the K6 load and performance testing suite, it’s easy to run and tune to generate a load representative of your expected usage (browsing / invoking REST mocks / invoking GraphQL mocks / etc.)\nNote that you can use this benchmark suite without necessarily enabling the OpenTelemetry and Grafana features. You can check the project’s Benchmark documentation for more information.\nWalkthrough Want to see all of this in action? 
Then, go through our guided tour just below.\nWe will start hereafter with the Microcks Grafana dashboard displaying the metrics and the logs. You’ll see that we have used the popular RED method (short for Rate, Errors, and Duration) to structure this dashboard. This gives an overview of the performance \u0026amp; general behavior of each mock service. Users can now correlate metrics, logs, and traces to better understand how mocks behave. Using the timeline, it is possible to narrow down to problematic timeframes, focus on a small set of logs, and analyze the corresponding traces. You’ll also get the response time distributions and their percentiles.\nThe percentile panels show little dots that link to examples of traces that have a certain response time. This allows the user to easily isolate significant traces that represent a potentially problematic execution.\nOn the bottom pane of the dashboard, you get access to the latest Microcks logs. As you can see, some log lines may be enriched with a traceId by the OpenTelemetry instrumentation. If you have other services calling Microcks instrumented with OpenTelemetry, the traceId is automatically propagated and it’s then possible to jump to these trace details and get the visualization of the full end-to-end trace:\nThose traces are the ideal way to diagnose slow components within your service call sequences and check that optimizations work (BTW, Microcks now provides MongoDB optimization hints within the benchmark suite 😉). From every trace, it’s also possible to isolate the logs related to a traceId to see the messages that were emitted during a span or the whole trace:\nTo get the above data and visualizations, we ran the benchmarking suite powered by K6, which launches four different scenarios simultaneously for one minute. Here’s the raw output we got, with details on executed scenarios and raw performance metrics:\n$ docker run --rm -i -e BASE_URL=${MICROCKS_BASE_URL} -e WAIT_TIME=0.1 grafana/k6:${K6_VERSION} run - \u0026lt; bench-microcks.js /\\ |‾‾| /‾‾/ /‾‾/ /\\ / \\ | |/ / / / / \\/ \\ | ( / ‾‾\\ / \\ | |\\ \\ | (‾) | / __________ \\ |__| \\__\\ \\_____/ .io execution: local script: - output: - scenarios: (100.00%) 4 scenarios, 85 max VUs, 2m45s max duration (incl. graceful stop): * browse: 20 looping VUs for 1m0s (exec: browse, gracefulStop: 30s) * invokeRESTMocks: 200 iterations for each of 40 VUs (maxDuration: 2m0s, exec: invokeRESTMocks, startTime: 5s, gracefulStop: 30s) * invokeGraphQLMocks: 100 iterations for each of 20 VUs (maxDuration: 2m0s, exec: invokeGraphQLMocks, startTime: 10s, gracefulStop: 30s) * invokeSOAPMocks: 5 iterations for each of 5 VUs (maxDuration: 2m0s, exec: invokeSOAPMocks, startTime: 15s, gracefulStop: 30s) [...] 
running (1m04.0s), 14/85 VUs, 10271 complete and 0 interrupted iterations browse ↓ [ 100% ] 20 VUs 1m0s invokeRESTMocks ✓ [ 100% ] 40 VUs 0m12.7s/2m0s 8000/8000 iters, 200 per VU invokeGraphQLMocks ✓ [ 100% ] 20 VUs 0m06.9s/2m0s 2000/2000 iters, 100 per VU invokeSOAPMocks ✓ [ 100% ] 5 VUs 0m16.0s/2m0s 25/25 iters, 5 per VU ✓ status code should be 200 ✓ pastryCall status is 200 ✓ eclairCall status is 200 ✓ eclairXmlCall status is 200 ✓ eclairXmlCall response is Xml ✓ millefeuilleCall status is 200 ✓ allFilmsCall status is 200 ✓ aFilmCall status is 200 ✓ aFilmFragmentCall status is 200 ✓ andrewCall status is 200 ✓ karlaCall status is 200 ✓ karlaCall body is correct ✓ laurentCall status is 500 ✓ laurentCall body is fault checks.........................: 100.00% ✓ 46385 ✗ 0 data_received..................: 132 MB 2.0 MB/s data_sent......................: 7.8 MB 122 kB/s http_req_blocked...............: avg=40.83µs min=291ns med=1.04µs max=18.4ms p(90)=5.04µs p(95)=8.37µs http_req_connecting............: avg=33.21µs min=0s med=0s max=18.35ms p(90)=0s p(95)=0s http_req_duration..............: avg=17.22ms min=1ms med=12.57ms max=782.36ms p(90)=28.2ms p(95)=36.67ms { expected_response:true }...: avg=17.21ms min=1ms med=12.57ms max=782.36ms p(90)=28.2ms p(95)=36.66ms http_req_failed................: 0.05% ✓ 26 ✗ 48709 http_req_receiving.............: avg=80.43µs min=6.5µs med=22.66µs max=29.12ms p(90)=129.29µs p(95)=235.55µs http_req_sending...............: avg=15.04µs min=1.58µs med=4.95µs max=9.27ms p(90)=22.33µs p(95)=36.83µs http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s http_req_waiting...............: avg=17.12ms min=902.5µs med=12.49ms max=782.34ms p(90)=28.08ms p(95)=36.49ms http_reqs......................: 48735 756.508316/s iteration_duration.............: avg=194.32ms min=10.5ms med=52.16ms max=5.57s p(90)=101.2ms p(95)=177.01ms iterations.....................: 10285 159.652981/s vus............................: 14 min=14 max=85 vus_max........................: 85 min=85 max=85 running (1m04.4s), 00/85 VUs, 10285 complete and 0 interrupted iterations browse ✓ [ 100% ] 20 VUs 1m0s invokeRESTMocks ✓ [ 100% ] 40 VUs 0m12.7s/2m0s 8000/8000 iters, 200 per VU invokeGraphQLMocks ✓ [ 100% ] 20 VUs 0m06.9s/2m0s 2000/2000 iters, 100 per VU invokeSOAPMocks ✓ [ 100% ] 5 VUs 0m16.0s/2m0s 25/25 iters, 5 per VU And yes, we got this impressive 756.5 hits/second with a p(90) response time of 28.2ms during the bench on a Macbook M2 with a 400MB heap! 🚀\nConclusion The Microcks user community expressed their wish to know what a single instance of Microcks is able to deliver in terms of throughput and response time. Through contribution we made Microcks ready to be fully observable, and we enabled optimization opportunities for very large-scale deployments. The project and the community users are now able to run benchmarks in autonomy to get figures and have a precise idea of what level of performance Microcks is capable of delivering.\nOn a personal note, OpenTelemetry is the second-largest CNCF project and it\u0026rsquo;s a big challenge to navigate in its ecosystem. This has been a good experience to find the nominal path to get the instrumentation, the storage of telemetry data, and visualization up and running for a real project. But, yes! I did it! 
💪\n"},{"section":"Blog","url":"https://microcks.io/blog/extend-microcks-with-custom-libs/","title":"Extend Microcks with custom libs and code","description":"Extend Microcks with custom libs and code","searchKeyword":"","content":"With the recent Microcks 1.8.1 version, there’s an abundance of exciting enhancements, from improved OpenAPI references support to optimizations for seamless usage via Testcontainers. But, in my humble opinion, a standout feature demands your attention. Introduced in 1.8.0 (see #897) and now completed with 1.8.1 (see #966), Microcks brings forth a game-changer: extensibility. Discover how to tailor and customize behaviors with your code or library, elevating your Microcks experience to heights!\nDefining and helping to ship this feature was the first contribution the CNAM - the French National Healthcare System - initiated with the Microcks community.\nAs adopters with a huge patrimony of mocks, we needed a way to customize some behaviors in a very scalable way.\nCollaborating with the Microcks maintainers was an enriching experience that led to this post and a second one that will unveil more details on how we use the solution.\nThis post is written as a walkthrough, to expose Microcks extension capabilities and demonstrate them using some samples. By the end of this tour, you should be able to apply your customizations and figure out the possibilities it offers. We will also share some thoughts on whether engaging with structural customizations may be appropriate (or not).\nWithout waiting, let’s go ahead!\nExtension capabilities At the core of Microcks’ mocking engine are Dispatchers. They are the pieces of logic that allow to match incoming requests and find the appropriate response. Dispatchers are generally deduced from your API artifacts, but they can be configured explicitly.\nThe SCRIPT dispatcher is the most versatile and powerful to integrate custom dispatching logic in Microcks. The scripts can be written in Groovy propose a very familiar syntax to Javascript users and come with an impressive number of built-in util features (JSON \u0026amp; XML, URL fetching, etc). However, implementing advanced processing logic and duplicating it on several APIs and versions can be cumbersome when done in pure Groovy with simple scripts!\nThat’s where our first extension capability comes into play, allowing you to easily reuse your own or third-party libraries across all your mocks. The use cases below have never been so easy thanks to this new capability:\nParse and analyze some custom headers or message envelopes, Gather external data to enrich your response with dynamic content, Reuse rich datasets or decision engines for smarter responses, Apply custom security validation. As a complement in 1.8.1, an extension endpoint has also been added to the Asynchronous part of Microcks on what is called the async-minion. You now can integrate Java libs as well to customize behavior. The first covered use-case is security mechanism customization when accessing external brokers like Kafka. Others will soon come (like supporting different JMS implementations for example).\nExploring the demo repository We have set up a specific GitHub repository to illustrate those extension endpoints and capabilities. The https://github.com/microcks/api-lifecycle/ repository now contains an acme-lib folder holding all the resources you need to understand and play with Microcks extensions. Let’s have a look at this repository:\n$ tree === OUTPUT === . 
|____Dockerfile.acme |____Dockerfile.acme.minion |____README.md |____config | |____features.properties | |____application.properties |____docker-compose-acme.yml |____docker-compose-acme-async.yml |____docker-compose-mount.yml |____docker-compose-mount-async.yml |____podman-compose-mount.yml |____lib | |____acme-lib-0.0.1-SNAPSHOT.jar |____src | |____main | | |____java | | | |____org | | | | |____acme | | | | | |____lib | | | | | | |____CustomAuthenticateCallbackHandler.java | | | | | | |____Greeting.java | | |____groovy | | | |____org | | | | |____acme | | | | | |____lib | | | | | | |____GroovyGreeting.groovy As a starting point, you may check the src/main/java or src/main/groovy folders where our sample utilities live:\norg.acme.lib.Greeting.java is just a Java class that holds greeting logic in a greet() method, org.acme.lib.GroovyGreeting.groovy is a Groovy class that holds greeting logic in a greet() method, org.acme.lib.CustomAuthenticateCallbackHandler.java is a Java Authentication callback handler that may be used in an OAuth authentication flow. To simplify things, those resources have been compiled and packaged into a JAR file in the lib folder.\nThis repository also contains several Dockerfile and docker-compose files that will be used to illustrate the extension of Microcks using this library. Some docker-compose files will also use the properties files from the config folder.\nMain component extension Let’s start with Microcks’ main component extension for reusing our library from the SCRIPT dispatcher.\nSimple docker-compose mount The first way of doing things is very convenient when you’re doing a local evaluation of Microcks using the Docker-compose installation. The local lib folder is simply mounted within the image /deployments/lib directory and additional JAVA_* environment variables are set to load all the JARs found at this location.\nSee it in action by starting this configuration:\ndocker-compose -f docker-compose-mount.yml up -d You should have two containers running (microcks and microcks-db) at that point. You can use the application by opening your browser to http://localhost:8080 - or change the port in the compose file if already used.\nFor a simple illustration, you may use one of the Microcks samples such as the Pastry API. Once loaded, you’ll need to edit the properties of the GET /pastry operation to access the section allowing you to configure the dispatching rules. 
Choose the SCRIPT dispatcher from the list and paste this simple script as new Dispatcher rules:\ndef java = new org.acme.lib.Greeting(); def groovy = new org.acme.lib.GroovyGreeting(); log.info java.greet(\u0026#34;World\u0026#34;) log.info groovy.greet(\u0026#34;My Dear\u0026#34;) return \u0026#34;pastries_json\u0026#34; This Groovy script will just illustrate the reuse of both the Java and Groovy classes - printing greeting information to the Microcks logs.\nOnce you have saved your changes, you can invoke the Microcks mock using a command like this one:\ncurl -X GET \u0026#39;http://localhost:8080/rest/API+Pastry+-+2.0/2.0.0/pastry\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; You may then inspect the logs of the running microcks container and see this kind of log trace:\n08:47:26.491 DEBUG 1 --- [80-exec-10] i.github.microcks.web.RestController : Found a valid operation GET /pastry with rules: def java = new org.acme.lib.Greeting(); def groovy = new org.acme.lib.GroovyGreeting(); log.info java.greet(\u0026#34;World\u0026#34;) log.info groovy.greet(\u0026#34;My Dear\u0026#34;) return \u0026#34;pastries_json\u0026#34; 08:47:27.272 INFO 1 --- [80-exec-10] i.g.m.util.script.ScriptEngineBinder : Hello World! 08:47:27.279 INFO 1 --- [80-exec-10] i.g.m.util.script.ScriptEngineBinder : Groovy My Dear! 08:47:27.279 DEBUG 1 --- [80-exec-10] i.github.microcks.web.RestController : Dispatch criteria for finding response is pastries_json Hooray! It works! 🎉 It demonstrates that Microcks can load arbitrary Java libraries and run them within your dispatching script. This sample is very basic but thanks to the huge Java ecosystem and Microcks features like request context injection and response templating, you have many possibilities!\nYou can now safely stop the containers:\ndocker-compose -f docker-compose-mount.yml down In the same way, you may want to use Podman to run the microcks container with external libs. See it in action by starting this configuration:\npodman pod create --name=pod_microcks --infra=true --share=net podman-compose --in-pod microcks -f \u0026#34;podman-compose-mount.yml\u0026#34; up -d Building a custom image Once happy with your library integration test, the next natural step would be to package everything as a custom immutable container image. That way, you can safely deploy it to your Kubernetes environments or even provide it to your developers using Microcks via the Testcontainers module.\nFor this, start writing this simple Dockerfile, extending the Microcks official image:\nFROM quay.io/microcks/microcks:1.8.1 # Copy libraries jar files COPY lib /deployments/lib ENV JAVA_OPTIONS=-Dloader.path=/deployments/lib ENV JAVA_MAIN_CLASS=org.springframework.boot.loader.PropertiesLauncher ENV JAVA_APP_JAR=app.jar We have simply reproduced what was done through the docker-compose previously: copying all the JAR files from lib and then setting JAVA environment variables. You may build your image with the acme/microcks-ext:nightly tag.\ndocker build -f Dockerfile.acme -t acme/microcks-ext:nightly . For a local test of your image, you can now run the docker-compose-acme.yml configuration:\ndocker-compose -f docker-compose-acme.yml up -d If you have run the previous “Simple docker-compose mount” step, you don’t have anything to change as you’re reusing the same database. 
Otherwise, load the Pastry API sample and apply the configuration of the previous section.\nInvoke your mock operations with the previous command as well and check the results in the logs:\n08:39:01.062 DEBUG 1 --- [080-exec-6] i.github.microcks.web.RestController : Found a valid operation GET /pastry with rules: def java = new org.acme.lib.Greeting(); def groovy = new org.acme.lib.GroovyGreeting(); log.info java.greet(\u0026#34;World\u0026#34;) log.info groovy.greet(\u0026#34;My Dear\u0026#34;) return \u0026#34;pastries_json\u0026#34; 08:39:01.433 INFO 1 --- [080-exec-6] i.g.m.util.script.ScriptEngineBinder : Hello World! 08:39:01.437 INFO 1 --- [080-exec-6] i.g.m.util.script.ScriptEngineBinder : Groovy My Dear! 08:39:01.438 DEBUG 1 --- [080-exec-6] i.github.microcks.web.RestController : Dispatch criteria for finding response is pastries_json Fantastic! 🚀 You now have a Microcks distribution customized with your extension available for all the mock services you will deploy!\nYou can now safely stop the containers:\ndocker-compose -f docker-compose-acme.yml down In a real Enterprise environment, it would be better to directly fetch the versioned library from an Enterprise Artifact repository like a Maven-compatible one. This would allow you to have reproducible builds of your custom image. It’s usually just a matter of adding a curl command to your Dockerfile:\n[...] RUN curl -f \u0026#34;${REPOSITORY_URL}\u0026#34;/${libname}/${version}/${libname}-${version}.jar -o ${LIBDIR}/${libname}-${version}.jar [...] Async Minion extension In this second part, we are exploring the extension capabilities of the async-minion component. It is an optional component that deals with all the Async API-related features in Microcks. We will extend it with a custom authentication callback handler for connecting to a Kafka broker.\nSimple docker-compose mount Here again, a very convenient way to start up is to use the Docker-compose installation. Contrary to the main component, the image /deployments/lib directory is already used for its purpose. So here, we will mount the local lib folder into /deployments/lib-ext. We must also set an additional JAVA_CLASSPATH environment variable referencing this location.\nSee it in action by starting this configuration:\ndocker-compose -f docker-compose-mount-async.yml up -d In this configuration, we will have four containers running - with additional microcks-async-minion and microcks-kafka corresponding to a Kafka broker:\n$ docker ps === OUTPUT === CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5d314d3bf8b0 quay.io/microcks/microcks-async-minion:nightly \u0026#34;/deployments/run-ja…\u0026#34; 5 seconds ago Up 1 second 8080/tcp, 0.0.0.0:8081-\u0026gt;8081/tcp microcks-async-minion 052dd9777229 quay.io/microcks/microcks:nightly \u0026#34;/deployments/run-ja…\u0026#34; 6 seconds ago Up 5 seconds 0.0.0.0:8080-\u0026gt;8080/tcp, 8778/tcp, 0.0.0.0:9090-\u0026gt;9090/tcp, 9779/tcp microcks 5ec66cc0910d mongo:3.6.23 \u0026#34;docker-entrypoint.s…\u0026#34; 6 seconds ago Up 5 seconds 27017/tcp microcks-db ca98a4b0ed9e vectorized/redpanda:v22.2.2 \u0026#34;/entrypoint.sh redp…\u0026#34; 6 seconds ago Up 5 seconds 8081-8082/tcp, 0.0.0.0:9092-\u0026gt;9092/tcp, 9644/tcp, 0.0.0.0:19092-\u0026gt;19092/tcp microcks-kafka In this extension use case, our custom callback handler class (org.acme.lib.CustomAuthenticateCallbackHandler.java) is directly referenced in the async-minion configuration file. 
You may check this line of the application.properties local file.\nOur callback handler implementation just adds a Handling the callback... log message when being invoked. You may then inspect the logs of the running microcks-async-minion container and see this kind of log trace:\n2024-01-09 12:46:08,568 INFO [io.sma.rea.mes.kafka] (main) SRMSG18229: Configured topics for channel \u0026#39;microcks-services-updates\u0026#39;: [microcks-services-updates] Handling the callback... 2024-01-09 12:46:08,641 INFO [org.apa.kaf.com.sec.oau.int.exp.ExpiringCredentialRefreshingLogin] (smallrye-kafka-consumer-thread-0) Successfully logged in. Cool! 😎 We got it working here again! It demonstrates that Microcks async-minion can load arbitrary Java libraries and include them in the runtime. This sample is still basic, but it opens the door to many more complex use cases, including specific broker implementations or future customizations on mock message sending or the contract-testing process.\nYou can now safely stop the containers:\ndocker-compose -f docker-compose-mount-async.yml down Building a custom image Finally, you may want to package a custom immutable container image for easily distributing this extended async-minion component.\nFor this, start writing this simple Dockerfile, extending the Microcks official image. Notice that here, we can reuse the /deployments/lib location as we’re not going to replace existing libs but augment them with our acme-lib-0.0.1-SNAPSHOT.jar file.\nFROM quay.io/microcks/microcks-async-minion:1.8.1 # Copy libraries jar files COPY lib /deployments/lib ENV JAVA_CLASSPATH=/deployments/*:/deployments/lib/* We have also set the JAVA_CLASSPATH to force the discovery of this new JAR file. You may then build your image with the acme/microcks-async-minion-ext:nightly tag.\ndocker build -f Dockerfile.acme.minion -t acme/microcks-async-minion-ext:nightly . For a local test of your image, you can now run the docker-compose-acme-async.yml configuration:\ndocker-compose -f docker-compose-acme-async.yml up -d If you have run the previous “Simple docker-compose mount” step, you know how our custom callback handler is configured and what it is supposed to do 😉\nCheck the results in the async-minion component logs:\n2024-01-09 09:09:22,399 INFO [io.sma.rea.mes.kafka] (main) SRMSG18229: Configured topics for channel \u0026#39;microcks-services-updates\u0026#39;: [microcks-services-updates] Handling the callback... 2024-01-09 09:09:22,566 INFO [org.apa.kaf.com.sec.oau.int.exp.ExpiringCredentialRefreshingLogin] (smallrye-kafka-consumer-thread-0) Successfully logged in. It’s packed! 📦 You now know how to fully extend and package a customized Microcks distribution! The new container images you produced can easily be reused via our Kubernetes Helm charts or Operator.\nYou can now safely stop the containers:\ndocker-compose -f docker-compose-acme-async.yml down Wrap-up In this post, we walked through a new feature of Microcks 1.8.1 that brings extension capabilities. You’ve learned how to integrate private or third-party Java libraries to customize the behavior of Microcks during mock invocation or when integrating with external brokers.\nThese capabilities pave the way for advanced use cases like the processing of common message structures or the dynamic enrichment of datasets to produce the smartest mocks. We’ll certainly have the opportunity to delve into more details of what we’ve done at the CNAM in a future blog post 😉\nAs a final note, I’d like to add some caution when proceeding with extensions. 
Remember that mocks must have two important characteristics: they must be quick to set up and easy to understand. They play an important role in easing the communication between providers and consumers and building a shared knowledge of a Service interface and behavior. Going into very complex customization - you know: this dream of a universal, dynamic, automated approach for everything - can make you lose sight of these goals!\nSo stay lightweight, with easy-to-explain, clearly scoped extensions, and do not hesitate to ask for help from the Microcks community!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.8.1-release/","title":"Microcks 1.8.1 release 🚀","description":"Microcks 1.8.1 release 🚀","searchKeyword":"","content":"We are thrilled to start this New Year with a brand new Microcks release! Say hi to the 1.8.1 release of Microcks, the Open-source cloud-native tool for API Mocking and Testing 👏\nThis release embeds 54 resolved issues, as we released an intermediary 1.8.1-M1 version to avoid some users waiting too long. Here are the highlights of this release:\nOpenAPI complex structures support was requested to handle edge cases raised by organizations having a great maturity on OpenAPI, Uber/All-in-one architecture simplification was required to allow further enhancements in our Shift-Left strategy and Testcontainers support, Kubernetes deployments are now better managed with Helm enhancements, enabling greater customization for an enhanced GitOps approach! Let’s do a review of what’s new on each one of our highlights without delay.\nOpenAPI complex structures While pretty simple in a self-contained approach, the OpenAPI specification can unveil a lot of complexity when dealing with references! At scale, it’s common to split your API specification information into different files describing schemas, parameters, and examples in a way that eases reuse and favors consistency.\nAs Microcks already supported simple cases, we certainly faced limits regarding the depth of dependencies or the exhaustiveness of supported constructs.\nWith great help from the community - thanks a lot to Apoorva Srinivas Appadoo 🙏 - we successfully enriched the set of supported complex structures:\nThe components of an OpenAPI schema are now fully parsed and converted. See #995, JSON Pointer is now supported to navigate example files. See #984, The discovery of dependencies is now transitive. See #986, The imported OpenAPI elements are now fully re-normalized to allow their use for validation purposes. See #1035 and #1037, References are now supported at the path/operation level. See #1034. We’re looking forward to hearing from our vibrant community if you find some other structures that may not yet be supported. During the development process of these new features, we also set up a new GitLab repository that holds some very complex features we can now support. Have a look at it if you want to check if your case can be handled.\nUber/All-in-one architecture simplification In the previous 1.8.0 release, we welcomed the Uber image: a stripped-down version of Microcks dedicated to Shift-Left scenarios and Local development approaches. We are going further with this release, extending the concept to the component holding the Async API-related features of Microcks: the Async Minion. And to bring you a lightweight experience, we had to review the way this component integrates with others.\nIn the canonical Microcks architecture, the Async Minion integrates mainly using an Apache Kafka broker. 
This architecture presents a very nice decoupling, allowing both Sync and Async components to scale independently and to be distributed on different nodes. However, these needs make little sense when Microcks is used locally on your development machine. Consequently, we changed the Kafka communication channel, switching to simple WebSocket communication as illustrated in the schema below:\nAs a consequence, a Kafka broker is no longer needed when you want to enable the Async API features of Microcks on your laptop! WebSocket protocol is directly supported by the new Microcks Uber Async Minion and if you’d like to mock or test some other protocols - like Amazon SQS, SNS, or even Kafka - you can, of course, bring and connect to your existing broker!\nThese essential elements can also be joined together when used in combination within our Testcontainers Module in what we call a Microcks Ensemble. An Ensemble is a simple way to configure them all together while offering a smooth and light experience. More on this in a future blog post 😉\nHelm Chart enhancements Whilst we’re improving Microcks for Shift-Left scenarios, having a top-notch deployment experience on Kubernetes is always a strong priority for us! Hence, we were very happy to welcome three contributions for this release:\nThe ability to disable Keycloak when deploying Microcks to use fewer resources when deploying on your laptop (see #1001). Many thanks to Kevin Viet 🙏 for his contribution! The customization of Kubernetes resource labels and annotations (see #1005) is important to allow standardization of Kubernetes apps in big companies. You can now label and annotate Microcks resources the way you need to meet your company’s policies. Thanks again to Kevin Viet 🙏 for it! The management of Secrets is always a tricky topic, especially when using a GitOps deployment process. Thanks to Romain Quinio 🙏, we now have a robust Helm Chart (see #1010) that can be used in combination with GitOps engines like ArgoCD to deploy Microcks in a breeze! Community and Events Reaching, interacting with, and building a strong community is one of our top priorities! For that, we decided to start a new Discord server that offers better support for real-time messaging, support forums, and team coordination around different project areas.\nYou can now join the community here: https://microcks.io/discord-invite\nTo those who were already chatting with us on our previous Discord chat, please make the switch! We plan to sunset our previous Discord chat at the end of March.\nThe last quarter of 2023 was a super-busy one with a lot of travel, conferences, and opportunities to meet passionate and enthusiastic people! So we also have a lot of recordings to share 😉\nYou’ll find below the available recordings for some of the events we spoke at - unfortunately, APIDays conferences are not recorded 😥:\nGraphQL Conference 2023 was hosted in the San Francisco Bay Area. We were talking about how to Increase Your Productivity With No-Code GraphQL Mocking, Devoxx Belgium 2023 took place in Antwerp. We explained how to Speed Up your API delivery with Microcks, AsyncAPI Tour 2023 had a stop at Bangalore this year! We traveled to India 🇮🇳 for the 1st time and had a talk called Elevating Event-Driven Architecture: Boost your delivery with AsyncAPI and Microcks, Quarkus Insights is an online meetup talking about everything Quarkus.io. We were invited for episode #148 to demonstrate Microcks in Quarkus with the Microcks DevService. 
And even if we had a network outage during the call 😥, we recorded a second demo. Red Hat DevNation Day was an online event on December 12th. We talked here with our friend Hugo Guerrero about \u0026ldquo;API Testing and Mocking with TestContainers\u0026rdquo; (link to be published soon). There are some nice demos using Quarkus and NodeJS out there! What’s coming next? As usual, we will eagerly prioritize items according to community feedback. You can check and collaborate via our list of issues on GitHub and the project roadmap.\nMore than ever, we want to involve community members in design discussions and start discussions about important additions regarding OpenAPI callbacks, webhooks and AsyncAPI in Microcks. Please join us to shape the future!\nRemember that we are an open community, which means you, too, can jump on board to make Microcks even greater! Come and say hi! on our GitHub discussion or Discord chat 👻, send some love through GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel!\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-on-kind/","title":"Microcks on Kind 🚢","description":"Microcks on Kind 🚢","searchKeyword":"","content":"I\u0026rsquo;m still on housekeeping duty! I went through my notes on installing Microcks on Kind and decided to refresh them. Network and Ingress configuration here is actually easier than in the Minikube setup.\nThese installation notes were run on my Apple MacBook M2, but those steps would essentially be the same on any Linux machine. Let\u0026rsquo;s go 🚀\nPreparation As a Mac user, I used brew to install kind. However, it is also available from several different package managers out there. You can check the Quick Start guide for that. Obviously, you\u0026rsquo;ll also need the kubectl utility to interact with your cluster.\n$ brew install kind $ kind --version kind version 0.20.0 In a dedicated folder, prepare a cluster-kind.yaml configuration file like this:\n$ cd ~/tmp $ mkdir microcks \u0026amp;\u0026amp; cd microcks $ cat \u0026gt; cluster-kind.yaml \u0026lt;\u0026lt;EOF kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane kubeadmConfigPatches: - | kind: InitConfiguration nodeRegistration: kubeletExtraArgs: node-labels: \u0026#34;ingress-ready=true\u0026#34; extraPortMappings: - containerPort: 80 hostPort: 80 protocol: TCP - containerPort: 443 hostPort: 443 protocol: TCP EOF Start and configure a cluster We\u0026rsquo;re now going to start a Kube cluster. Start your kind cluster using the cluster-kind.yaml configuration file we just created:\n$ kind create cluster --config=cluster-kind.yaml --- OUTPUT --- Creating cluster \u0026#34;kind\u0026#34; ... ✓ Ensuring node image (kindest/node:v1.27.3) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ ✓ Installing CNI 🔌 ✓ Installing StorageClass 💾 Set kubectl context to \u0026#34;kind-kind\u0026#34; You can now use your cluster with: kubectl cluster-info --context kind-kind Have a question, bug, or feature request? Let us know! 
https://kind.sigs.k8s.io/#community 🙂 Install an Ingress Controller in this cluster; we selected nginx, but other options are available (see https://kind.sigs.k8s.io/docs/user/ingress).\n$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml Wait for the controller to be available:\n$ kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=90s Install Microcks with default options We\u0026rsquo;re now going to install Microcks with basic options. We\u0026rsquo;ll do that using the Helm Chart so you\u0026rsquo;ll also need the helm binary. You can use brew install helm on Mac for that.\n$ kubectl create namespace microcks $ helm repo add microcks https://microcks.io/helm $ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.127.0.0.1.nip.io --set keycloak.url=keycloak.127.0.0.1.nip.io --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --- OUTPUT --- NAME: microcks LAST DEPLOYED: Sun Dec 3 19:27:27 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.127.0.0.1.nip.io. GRPC mock service is available at \u0026#34;microcks-grpc.127.0.0.1.nip.io\u0026#34;. It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.127.0.0.1.nip.io to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by login in using the username and password found into \u0026#39;microcks-keycloak-admin\u0026#39; secret. Wait for images to be pulled, pods to be started and ingresses to be there:\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-577874c5b6-z97zm 1/1 Running 0 73s microcks-keycloak-7477cd4fbb-tbmg7 1/1 Running 0 21s microcks-keycloak-postgresql-868b7dbdd4-8zrbv 1/1 Running 0 10m microcks-mongodb-78888fb67f-47fwh 1/1 Running 0 10m microcks-postman-runtime-5d8fc9695-kp45w 1/1 Running 0 10m $ kubectl get ingresses -n microcks --- OUTPUT --- NAME CLASS HOSTS ADDRESS PORTS AGE microcks \u0026lt;none\u0026gt; microcks.127.0.0.1.nip.io localhost 80, 443 10m microcks-grpc \u0026lt;none\u0026gt; microcks-grpc.127.0.0.1.nip.io localhost 80, 443 10m microcks-keycloak \u0026lt;none\u0026gt; keycloak.127.0.0.1.nip.io localhost 80, 443 10m Start opening https://keycloak.127.0.0.1.nip.io in your browser to validate the self-signed certificate. Once done, you can visit https://microcks.127.0.0.1.nip.io in your browser, validate the self-signed certificate and start playing around with Microcks!\nThe default user/password is admin/microcks123.\nInstall Microcks with asynchronous options In this section, we\u0026rsquo;re doing a complete install of Microcks, enabling the asynchronous protocol features. This requires deploying additional pods and a Kafka cluster. 
Microcks can install and manage its own Kafka cluster using the Strimzi project.\nTo be able to expose the Kafka cluster to the outside of Kind, you’ll need to enable SSL passthrough on nginx. This requires updating the default ingress controller deployment:\n$ kubectl patch -n ingress-nginx deployment/ingress-nginx-controller --type=\u0026#39;json\u0026#39; \\ -p \u0026#39;[{\u0026#34;op\u0026#34;:\u0026#34;add\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;/spec/template/spec/containers/0/args/-\u0026#34;,\u0026#34;value\u0026#34;:\u0026#34;--enable-ssl-passthrough\u0026#34;}]\u0026#39; Then, you have to install the latest version of Strimzi, which provides an easy way to set up Kafka on Kubernetes:\n$ kubectl apply -f \u0026#39;https://strimzi.io/install/latest?namespace=microcks\u0026#39; -n microcks Now, you can install Microcks using the Helm chart and enable the asynchronous features:\n$ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.127.0.0.1.nip.io --set keycloak.url=keycloak.127.0.0.1.nip.io --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --set features.async.enabled=true --set features.async.kafka.url=kafka.127.0.0.1.nip.io --- OUTPUT --- NAME: microcks LAST DEPLOYED: Sun Dec 3 20:14:38 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.127.0.0.1.nip.io. GRPC mock service is available at \u0026#34;microcks-grpc.127.0.0.1.nip.io\u0026#34;. It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.127.0.0.1.nip.io to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by login in using the username and password found into \u0026#39;microcks-keycloak-admin\u0026#39; secret. Kafka broker has been deployed on microcks-kafka.kafka.127.0.0.1.nip.io. 
It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath=\u0026#39;{.data.ca\\.crt}\u0026#39; | base64 -d \u0026gt; ca.crt Watch and check the pods you should get in the namespace:\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-6ffcc7dc54-c9h4w 1/1 Running 0 68s microcks-async-minion-7f689d9ff7-ptv4c 1/1 Running 2 (40s ago) 48s microcks-kafka-entity-operator-585dc4cd45-24tvp 3/3 Running 0 2m19s microcks-kafka-kafka-0 1/1 Running 0 2m41s microcks-kafka-zookeeper-0 1/1 Running 5 (4m56s ago) 6m43s microcks-keycloak-77447d8957-fwhv6 1/1 Running 0 87s microcks-keycloak-postgresql-868b7dbdd4-pb52g 1/1 Running 0 2m43s microcks-mongodb-78888fb67f-7t2vf 1/1 Running 4 (3m57s ago) 8m2s microcks-postman-runtime-857c577dfb-d597r 1/1 Running 0 8m2s strimzi-cluster-operator-95d88f6b5-p8bvs 1/1 Running 0 16m Now you can extract the Kafka cluster certificate using kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath='{.data.ca\\.crt}' | base64 -d \u0026gt; ca.crt and apply the checks found at Async Features with Docker Compose.\nStart with loading the User signed-up API sample within your Microcks instance - remember that you have to validate the self-signed certificates like in the basic install first.\nNow connect to the Kafka broker pod to check that a topic has been correctly created and that you can consume messages from there:\n$ kubectl -n microcks exec microcks-kafka-kafka-0 -it -- /bin/sh --- INPUT --- sh-4.4$ cd bin sh-4.4$ ./kafka-topics.sh --bootstrap-server localhost:9092 --list UsersignedupAPI-0.1.1-user-signedup __consumer_offsets microcks-services-updates sh-4.4$ ./kafka-console-consumer.sh --bootstrap-server microcks-kafka-kafka-bootstrap:9092 --topic UsersignedupAPI-0.1.1-user-signedup {\u0026#34;id\u0026#34;: \u0026#34;eNc5TNaPlHAKa38XQA8N7HkSRHl7Yvm1\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699907417\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;g9uDUhXPOPtwK9bZYSGmqbxHAC3tTxAz\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699907428\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;kllBuhcv3kTRNg75sFxWH6HGLtSbpXwZ\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699917413\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;YE2ZAdVwSK9JLGEyLFebHxMOVfmYlzs1\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699917426\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} ^CProcessed a total of 4 messages sh-4.4$ exit exit command terminated with exit code 130 And finally, from your Mac host, you can install the kcat utility to consume messages as well. 
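If you\u0026rsquo;re on a Mac like me, Homebrew is probably the quickest route to get it - assuming the formula is still named kcat (it was formerly kafkacat), which is worth double-checking with brew search:\n$ brew install kcat 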
You\u0026rsquo;ll need to reference the ca.crt certificate you previously extracted:\n$ kcat -b microcks-kafka.kafka.127.0.0.1.nip.io:443 -X security.protocol=SSL -X ssl.ca.location=ca.crt -t UsersignedupAPI-0.1.1-user-signedup --- OUTPUT --- % Auto-selecting Consumer mode (use -P or -C to override) {\u0026#34;id\u0026#34;: \u0026#34;zYcAzFlRoTGvu9Mu4ajg30lr1fBa4Kah\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699827456\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;v0TkDvd1Z7RxynQvi1i0NmXAaLPzuYXE\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699827585\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;JK55813rQ938Hj50JWXy80s5KWC61Uvr\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699837416\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;MZnR6UeKVXMhJET6asTjafPpfldiqXim\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699837430\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} [...] % Reached end of topic UsersignedupAPI-0.1.1-user-signedup [0] at offset 30 ^C% Delete everything and stop the cluster Deleting the microcks Helm release from your cluster is straightforward. Then you can finally delete your Kind cluster to save some resources!\n$ helm delete microcks -n microcks --- OUTPUT --- release \u0026#34;microcks\u0026#34; uninstalled $ kind delete cluster --- OUTPUT --- Deleting cluster \u0026#34;kind\u0026#34; ... Deleted nodes: [\u0026#34;kind-control-plane\u0026#34;] Happy testing!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-on-minikube/","title":"Microcks on Minikube 🧊","description":"Microcks on Minikube 🧊","searchKeyword":"","content":"As we close the year, it\u0026rsquo;s a good time for some housekeeping! On this occasion, I found some installation notes that could be worth transforming into proper blog posts or documentation. I went through my notes on installing Microcks on Minikube and decided to refresh them. They also needed to be completed with detailed information that we usually take for granted and forget to mention - such as network and Ingress configuration.\nThese installation notes were run on my Apple MacBook M2, but the steps should be essentially the same on any Linux machine. Let\u0026rsquo;s go 🚀\nPreparation As a Mac user, I used brew to install minikube. However, it is also available from several different package managers out there. You can also check the Getting Started guide to access direct binary downloads. Obviously, you\u0026rsquo;ll also need the kubectl utility to interact with your cluster.\n$ brew install minikube $ minikube version minikube version: v1.29.0 commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3 We use the basic, default configuration of minikube coming with the docker driver:\n$ minikube config view - driver: docker Start and configure a cluster We\u0026rsquo;re now going to start a Kube cluster. 
Start your minikube cluster with the defaults.\nMy default locale is French, but you\u0026rsquo;ll easily translate to your own language thanks to the nice emojis at the beginning of lines 😉\n$ minikube start --- OUTPUT --- 😄 minikube v1.29.0 sur Darwin 14.1.2 (arm64) ✨ Utilisation du pilote docker basé sur le profil existant 👍 Démarrage du noeud de plan de contrôle minikube dans le cluster minikube 🚜 Extraction de l\u0026#39;image de base... 🤷 docker \u0026#34;minikube\u0026#34; container est manquant, il va être recréé. 🔥 Création de docker container (CPUs=4, Memory=6144Mo) ... 🐳 Préparation de Kubernetes v1.26.1 sur Docker 20.10.23... 🔗 Configuration de bridge CNI (Container Networking Interface)... 🔎 Vérification des composants Kubernetes... ▪ Utilisation de l\u0026#39;image gcr.io/k8s-minikube/storage-provisioner:v5 ▪ Utilisation de l\u0026#39;image docker.io/kubernetesui/dashboard:v2.7.0 💡 Après que le module est activé, veuiller exécuter \u0026#34;minikube tunnel\u0026#34; et vos ressources ingress seront disponibles à \u0026#34;127.0.0.1\u0026#34; ▪ Utilisation de l\u0026#39;image docker.io/kubernetesui/metrics-scraper:v1.0.8 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/controller:v1.5.1 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343 🔎 Vérification du module ingress... 💡 Certaines fonctionnalités du tableau de bord nécessitent le module metrics-server. Pour activer toutes les fonctionnalités, veuillez exécuter : minikube addons enable metrics-server\t🌟 Modules activés: storage-provisioner, default-storageclass, dashboard, ingress 🏄 Terminé ! kubectl est maintenant configuré pour utiliser \u0026#34;minikube\u0026#34; cluster et espace de noms \u0026#34;default\u0026#34; par défaut. You need to enable the ingress add-on if not already set by default:\n$ minikube addons enable ingress --- OUTPUT --- 💡 ingress est un addon maintenu par Kubernetes. Pour toute question, contactez minikube sur GitHub. Vous pouvez consulter la liste des mainteneurs de minikube sur : https://github.com/kubernetes/minikube/blob/master/OWNERS 💡 Après que le module est activé, veuiller exécuter \u0026#34;minikube tunnel\u0026#34; et vos ressources ingress seront disponibles à \u0026#34;127.0.0.1\u0026#34; ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/controller:v1.5.1 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343 🔎 Vérification du module ingress... 🌟 Le module \u0026#39;ingress\u0026#39; est activé You can check the connection to the cluster and that Ingresses are OK by running the following command:\n$ kubectl get pods -n ingress-nginx --- OUTPUT --- NAME READY STATUS RESTARTS AGE ingress-nginx-admission-create-dz95x 0/1 Completed 0 26m ingress-nginx-admission-patch-5bjwv 0/1 Completed 1 26m ingress-nginx-controller-b6894599f-pml9s 1/1 Running 0 26m Install Microcks with default options We\u0026rsquo;re now going to install Microcks with basic options. We\u0026rsquo;ll do that using the Helm Chart, so you\u0026rsquo;ll also need the helm binary. You can use brew install helm on Mac for that.\nThen, we\u0026rsquo;ll need to prepare the /etc/hosts file to access Microcks using an Ingress. Add the line containing the microcks.m.minikube.local address. 
You need to declare two host names, one for Microcks and one for Keycloak.\n$ cat /etc/hosts --- OUTPUT --- ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 microcks.m.minikube.local keycloak.m.minikube.local 255.255.255.255 broadcasthost ::1 localhost Now create a new namespace and do the install in this namespace:\n$ kubectl create namespace microcks $ helm repo add microcks https://microcks.io/helm $ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.m.minikube.local --set keycloak.url=keycloak.m.minikube.local --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --- OUTPUT --- NAME: microcks LAST DEPLOYED: Tue Dec 19 15:23:23 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.m.minikube.local. GRPC mock service is available at \u0026#34;microcks-grpc.m.minikube.local\u0026#34;. It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.m.minikube.local to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by login in using the username and password found into \u0026#39;microcks-keycloak-admin\u0026#39; secret. Wait for the images to be pulled, pods to be started and ingresses to be created:\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-865b66d867-httf7 1/1 Running 0 56s microcks-keycloak-5bd7866b5f-9kr8k 1/1 Running 0 56s microcks-keycloak-postgresql-6cfc7bf6c4-qb9rv 1/1 Running 0 56s microcks-mongodb-d584889cf-wnzzb 1/1 Running 0 56s microcks-postman-runtime-5cbc478db7-rzprn 1/1 Running 0 56s $ kubectl get ingresses -n microcks --- OUTPUT --- NAME CLASS HOSTS ADDRESS PORTS AGE microcks nginx microcks.m.minikube.local 192.168.49.2 80, 443 2m4s microcks-grpc nginx microcks-grpc.m.minikube.local 192.168.49.2 80, 443 2m4s microcks-keycloak nginx keycloak.m.minikube.local 192.168.49.2 80, 443 2m4s To access the ingress from your browser, you\u0026rsquo;ll need to start the network tunneling service of Minikube - it may ask for sudo permission depending on when you opened your latest session:\n$ minikube tunnel --- OUTPUT --- ✅ Tunnel démarré avec succès 📌 REMARQUE : veuillez ne pas fermer ce terminal car ce processus doit rester actif pour que le tunnel soit accessible... ❗ Le service/ingress microcks nécessite l\u0026#39;exposition des ports privilégiés : [80 443] 🔑 sudo permission will be asked for it. 🏃 Tunnel de démarrage pour le service microcks-keycloak. ❗ Le service/ingress microcks-grpc nécessite l\u0026#39;exposition des ports privilégiés : [80 443] 🏃 Tunnel de démarrage pour le service microcks. 🔑 sudo permission will be asked for it. 🏃 Tunnel de démarrage pour le service microcks-grpc. ❗ Le service/ingress microcks-keycloak nécessite l\u0026#39;exposition des ports privilégiés : [80 443] 🔑 sudo permission will be asked for it. 🏃 Tunnel de démarrage pour le service microcks-keycloak. 
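With the tunnel up-and-running, you can quickly check that the Ingress answers before switching to a browser. As an illustration - the /api/health endpoint used here is an assumption on my part, and -k simply skips verification of the self-signed certificate:\n$ curl -k https://microcks.m.minikube.local/api/health 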
Start opening https://keycloak.m.minikube.local in your browser to validate the self-signed certificate. Once done, you can visit https://microcks.m.minikube.local in your browser, validate the self-signed certificate and start playing around with Microcks!\nThe default user/password is admin/microcks123\nInstall Microcks with asynchronous options In this section, we\u0026rsquo;re doing a complete install of Microcks, enabling the asynchronous protocols features. This requires deploying additional pods and a Kafka cluster. Microcks can install and manage its own Kafka cluster using the Strimzi project.\nTo be able to expose the Kafka cluster to the outside of Minikube, you’ll need to enable SSL passthrough on nginx. This requires updating the default ingress controller deployment:\n$ kubectl patch -n ingress-nginx deployment/ingress-nginx-controller --type=\u0026#39;json\u0026#39; \\ -p \u0026#39;[{\u0026#34;op\u0026#34;:\u0026#34;add\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;/spec/template/spec/containers/0/args/-\u0026#34;,\u0026#34;value\u0026#34;:\u0026#34;--enable-ssl-passthrough\u0026#34;}]\u0026#39; Then, you\u0026rsquo;ll also have to update your /etc/hosts file so that you can access the Microcks Kafka broker using an Ingress. Add the line containing the microcks-kafka.kafka.m.minikube.local and microcks-kafka-0.kafka.m.minikube.local hosts:\n$ cat /etc/hosts --- OUTPUT --- ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 microcks.m.minikube.local keycloak.m.minikube.local microcks-kafka.kafka.m.minikube.local microcks-kafka-0.kafka.m.minikube.local 255.255.255.255 broadcasthost ::1 localhost You\u0026rsquo;ll still need to have the minikube tunnel service up-and-running like in the previous section. Next, you have to install the latest version of the Strimzi operator:\n$ kubectl apply -f \u0026#39;https://strimzi.io/install/latest?namespace=microcks\u0026#39; -n microcks Now, you can install Microcks using the Helm chart and enable the asynchronous features:\n$ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.m.minikube.local --set keycloak.url=keycloak.m.minikube.local --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --set features.async.enabled=true --set features.async.kafka.url=kafka.m.minikube.local --- OUTPUT --- NAME: microcks LAST DEPLOYED: Tue Dec 26 15:07:35 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.m.minikube.local. GRPC mock service is available at \u0026#34;microcks-grpc.m.minikube.local\u0026#34;. It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.m.minikube.local to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by login in using the username and password found into \u0026#39;microcks-keycloak-admin\u0026#39; secret. Kafka broker has been deployed on microcks-kafka.kafka.m.minikube.local. 
It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath=\u0026#39;{.data.ca\\.crt}\u0026#39; | base64 -d \u0026gt; ca.crt Watch and check the pods you should get in the namespace (this can take a bit longer if you pull Kafka images for the first time):\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-5fbf679987-kzctj 1/1 Running 1 (116s ago) 4m32s microcks-async-minion-ddfc99cf5-lcs7s 1/1 Running 5 (101s ago) 4m32s microcks-kafka-entity-operator-5755ff865-f4ktn 2/2 Running 1 (114s ago) 2m37s microcks-kafka-kafka-0 1/1 Running 0 3m microcks-kafka-zookeeper-0 1/1 Running 0 4m29s microcks-keycloak-589f68fb76-xdn5w 1/1 Running 1 (4m9s ago) 4m32s microcks-keycloak-postgresql-6cfc7bf6c4-4mc79 1/1 Running 0 4m32s microcks-mongodb-d584889cf-m74mc 1/1 Running 0 4m32s microcks-postman-runtime-5d859fcdc4-zttkv 1/1 Running 0 4m32s strimzi-cluster-operator-75d7f76545-k9scj 1/1 Running 0 6m40s Now you can extract the Kafka cluster certificate using kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath='{.data.ca\\.crt}' | base64 -d \u0026gt; ca.crt and apply the checks found at Async Features with Docker Compose.\nStart with loading the User signed-up API sample within your Microcks instance - remember that you have to validate the self-signed certificates like in the basic install first.\nNow connect to the Kafka broker pod to check that a topic has been correctly created and that you can consume messages from there:\n$ kubectl -n microcks exec microcks-kafka-kafka-0 -it -- /bin/sh --- INPUT --- sh-4.4$ cd bin sh-4.4$ ./kafka-topics.sh --bootstrap-server localhost:9092 --list UsersignedupAPI-0.1.1-user-signedup __consumer_offsets microcks-services-updates sh-4.4$ ./kafka-console-consumer.sh --bootstrap-server microcks-kafka-kafka-bootstrap:9092 --topic UsersignedupAPI-0.1.1-user-signedup {\u0026#34;id\u0026#34;: \u0026#34;sinHVoQvNdA3Bhl4fi57IVI15390WBkn\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703599175911\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;650YIRQaB2OsG52txubYAEJfdFB3jOzh\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703599175914\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;QWimzV9X1BRgIodOWoDdsP9EKtFSniDW\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703599185914\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;ivMQIz7J7IXqps5yqcaVo6qvuByhviVk\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703599185921\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;hEUfxuQRHHZkt9zFzMl5ti9DOIp12vpd\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703599195914\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} 
{\u0026#34;id\u0026#34;:\u0026#34;OggnbfXX67QbfeMGXOTiOGT2BuqEPCPL\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703599195926\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} ^CProcessed a total of 6 messages sh-4.4$ exit exit command terminated with exit code 130 And finally, from your Mac host, you can install the kcat utility to consume messages as well. You\u0026rsquo;ll need to reference the ca.crt certificate you previously extracted:\n$ kcat -b microcks-kafka.kafka.m.minikube.local:443 -X security.protocol=SSL -X ssl.ca.location=ca.crt -t UsersignedupAPI-0.1.1-user-signedup --- OUTPUT --- % Auto-selecting Consumer mode (use -P or -C to override) {\u0026#34;id\u0026#34;: \u0026#34;FrncZaUsQFWPlcKSm4onTrw3o0sXhMkJ\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703600745149\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;EFcTdsrMuxKJiJUUikJnnSZWaKxltfJ0\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703600745275\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;Kxqp7P75cM07SwasVcK3MIsLp5oWUD52\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703600755112\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;p2c3SbFoGflV4DzjsyA8cLqCsCZQ96fC\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703600755117\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} [...] % Reached end of topic UsersignedupAPI-0.1.1-user-signedup [0] at offset 106 ^C% Delete everything and stop the cluster Deleting the microcks Helm release from your cluster is straightforward. Then you can finally stop your Minikube cluster to save some resources!\n$ helm delete microcks -n microcks --- OUTPUT --- release \u0026#34;microcks\u0026#34; uninstalled $ minikube stop --- OUTPUT --- ✋ Nœud d\u0026#39;arrêt \u0026#34;minikube\u0026#34; ... 🛑 Mise hors tension du profil \u0026#34;minikube\u0026#34; via SSH… 🛑 1 nœud arrêté. Happy testing!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.8.0-release/","title":"Microcks 1.8.0 release 🚀","description":"Microcks 1.8.0 release 🚀","searchKeyword":"","content":"As the seasons transition, we are excited to unveil the 1.8.0 release of Microcks, the CNCF\u0026rsquo;s open-source cloud-native tool for API Mocking and Testing, right on the cusp of winter! ❄️ 🚀\nWith 47 resolved issues and 5 external PRs (from new contributors), this new release brings you a wave of new features, including AI Copilot, support for HAR artifacts, OAuth2 secured endpoint testing, the Microcks super-light Uber image, an official Testcontainers module, two developer-friendly buttons for easy interactions (Copy as curl command and Add to your CI/CD), and enhanced contextual help.\nWithout further ado, let\u0026rsquo;s review the latest updates for each of our key highlights.\nOpen to the community! 
This new 1.8.0 release is the first one since the project entered the Cloud Native Computing Foundation just before the Summer. As a new Sandbox project, we’ve gone through our onboarding process and worked hard to set the bar high regarding Open Source \u0026amp; Community best practices! As a result, this release of Microcks is the first where the trademark and source code copyright entirely belong to the CNCF.\nThis trademark update makes the contributions and governance guarantees transparent and aligned with the world\u0026rsquo;s best practices. Yet another strong confirmation of the Microcks community\u0026rsquo;s commitment to driving this fantastic project the open source way! In addition, we introduced with this release:\nOpen Source Security Foundation best practice assessment: see our current assessment, Open Quality metrics with Sonar Cloud: see our current status, Open Contribution guidelines: see our guideline, An explicit Security policy: see our policy, Our Community Code of Conduct: see our code. We’re looking forward to growing our vibrant community as a radically transparent, diverse, inclusive and harassment-free space for collaborating on the future of cloud-native API testing! ☁️\nOpen to streamlined usages! This 1.8.0 release also brings its batch of functional improvements to ease your life working in this multi-protocol API ecosystem. Because working with APIs happens in a variety of situations and workflows, within a rich ecosystem of solutions, we worked hard to make Microcks relevant and easier to use in all of them.\nIntroducing AI Copilot Adding samples to an existing OpenAPI specification or Postman Collection - so that Microcks can handle mocks for you - can sometimes be tedious or boring. Microcks now makes life easier, integrating generative AI for this! Simply tap the \u0026ldquo;AI Copilot\u0026rdquo; button, and we\u0026rsquo;ll promptly produce compliant and contextually relevant samples for REST, GraphQL, and AsyncAPI!\n🚨 Exciting #AI features are coming to Microcks! 🧠\nWatch our early prototype in action! See how to boost your #API development lifecycle with our AI Copilot on API #mocking and #testing workflows! pic.twitter.com/Zeoc07qaBB\n\u0026mdash; Microcks (@microcksio) July 5, 2023 This feature leverages the OpenAI GPT models API under the hood as a first implementation. We designed it to be adaptable and ready for other AI engines in the future. Check this issue for more information.\nSupport for HAR artifacts We heard many people capture live traffic to re-inject this data into their OpenAPI or Postman Collection files and reuse them as mock definitions. During this process, they curate the recording list and have to develop/automate the transformation of their captured data to other formats. We want to ease their pain and propose a more straightforward way using HTTP Archive Format (HAR).\nLots of proxy tooling already uses this format as an export format (mitmproxy, Chromium-based browsers, Postman, Insomnia, Proxyman, Fiddler, Gatling, Selenium, GitLab HAR recorder, etc.), so it is easy to integrate with most existing recording solutions.\nThat way, we think we’ve got the Linux philosophy at its best, supporting the following flow:\nLet specialized tooling do the capture/recording and exporting of traffic using HAR format. Optionally curate the recorded content to remove noise and inappropriate data. Integrate the resulting HAR into Microcks to reuse the captures as mock sources. Using HAR in Microcks as a primary or secondary artifact is pretty straightforward. You only have to add a specific comment to your file to tell Microcks the name and version of the API it relates to. Check our documentation.
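To give a rough idea of the convention, here is a sketch of what such a comment could look like inside the HAR log structure - the microcksId marker and the exact property layout are given from memory, so treat the documentation as the authoritative reference, and note that My API:1.0 is a purely hypothetical identifier:\n{ \u0026#34;log\u0026#34;: { \u0026#34;version\u0026#34;: \u0026#34;1.2\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;microcksId: My API:1.0\u0026#34;, \u0026#34;entries\u0026#34;: [ ... ] } } 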
Test OAuth2 secured endpoints Testing a secured API endpoint can be tedious as it often involves retrieving and managing an access token for the target endpoint. Luckily, if you secure your API endpoint using OAuth2, Microcks can now handle this burden!\nWe’ve introduced the support of OAuth2 Client Credentials Flow, Refresh Token Rotation and Resource Owner Password Flow: you only have to provide your OAuth client information to Microcks, and it will handle the authorization flow for you - taking care of transmitting the retrieved token to your API or services under test.\nThis feature is now enabled via the Microcks UI, API, command line tool microcks-cli, and the different CI/CD integrations. Check our documentation.\nUser experience enhancements A simple button can bring considerable usability improvement! We scratched our heads and delivered four noticeable enhancements to Microcks’ UI:\nWe now have a “Copy as curl command” button to ease playing with mocks, We added an “Add to your CI/CD” button that generates the code to paste into your CI/CD pipeline. It works for GitHub Actions, GitLab CI, Jenkins pipelines, Tekton pipelines and our own CLI, We added contextual help when launching a new test from the Microcks UI. It’s now way easier to figure out the correct syntax for AsyncAPI or to set the suitable security options, We augmented and curated the online “Help Box”, which displays help on the most common features. See those features in action in the screenshots below. Click on screenshots to access the whole image and get details.\nOpen to Shift-Left eXperiences! We\u0026rsquo;ve been looking at integrating Microcks into the Inner Loop of Shift-Left scenarios for a long time. Fast bootstrap times and a very light footprint are critical to achieving this and providing a smooth developer experience. Unfortunately, we were not able to do that with previous versions of Microcks.\nWelcome Uber image Things have changed this Summer, and we’re pleased to announce a new dedicated container image called microcks-uber! microcks-uber can be considered a stripped-down distribution of Microcks, following the same lifecycle but providing the essential services in a single container. How to run microcks-uber? It is as simple as running this command line:\ndocker run -p 8585:8080 -it quay.io/microcks/microcks-uber:1.8.0 Putting together this new container image has also brought a lot of enhancements to the regular Microcks container images. We have some wonderful achievements out there:\nWe decreased the size of the image by 30MB (that\u0026rsquo;s close to 12%) We reduced the number of CVEs by 18 (that\u0026rsquo;s close to 35%) The startup time of the container is now 2.2 sec instead of 2.7 sec on my machine (that\u0026rsquo;s close to 20%) The memory consumption has also decreased as we\u0026rsquo;re loading way fewer classes in Heap and Metaspace As the Uber distribution of Microcks is perfectly well-adapted for a quick evaluation, we don’t recommend running it in production! It doesn’t embed the authorization/authentication features provided by Keycloak or the performance guarantees offered by an external MongoDB instance. The original purpose of this Uber distribution is for use with testing libraries like Testcontainers.\nWelcome Testcontainers 🧊 A direct illustration of the benefits of microcks-uber is its usage from the trendy Testcontainers library! Microcks now provides official modules for Testcontainers via a partnership with AtomicJar, the company behind this fantastic library! You can find information on the official module on the Testcontainers Microcks page.\nHow does it feel to use Microcks from Testcontainers? Well, it is pretty straightforward! From your unit tests, you have to start a MicrocksContainer, and you have a ready-to-use ephemeral instance of Microcks for mocking your dependencies or contract-testing your API:\nconst microcks = await new MicrocksContainer().start(); 
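A slightly fuller TypeScript sketch could look like the following. Be aware that withMainArtifacts() and getRestMockEndpoint() are assumptions modeled on the Java module, and the artifact file and API Pastry name/version are purely hypothetical - check the module README for the exact API:\nimport { MicrocksContainer } from \u0026#34;@microcks/microcks-testcontainers\u0026#34;;\n// Start an ephemeral Microcks instance and load an OpenAPI artifact as the mock source (assumed helper).\nconst microcks = await new MicrocksContainer().withMainArtifacts([\u0026#34;./test/resources/pastry-api-openapi.yaml\u0026#34;]).start();\n// Ask Microcks for the REST mock endpoint generated for this API (assumed helper, hypothetical name/version).\nconst endpoint = microcks.getRestMockEndpoint(\u0026#34;API Pastry\u0026#34;, \u0026#34;1.0.0\u0026#34;);\n// Exercise your HTTP client against endpoint, then throw the instance away.\nawait microcks.stop(); 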
As of today, we provide support for the following languages:\nJava ☕️ - See our GitHub repository - See our demo application for Spring Boot 🍃 and for Quarkus. NodeJS - See our GitHub repository Go is coming soon, and we will be happy to have community contributions for Python and other libraries 😎 You want to become a Microcks maintainer? Just join us on GitHub 🙌 Open to contributions! Community contributions are essential to us and do not come only from feature requests, bug issues, and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in a demonstration, inviting us to events, or even talking about Microcks!\nWe’d like to thank the following awesome people:\nJosh Long 🙏 for this fantastic Coffee + Software Livestream we’ve recorded together at Devoxx Belgium; and a big shout out to Sebi 🙏 for connecting people! Mathieu Amblard 🙏 for his contribution to our Testcontainers Java module regarding a JSON serialization issue, Apoorva Srinivas 🙏 for their fix of the Absolute URL location override issue, Erik Pragt 🙏 for his Replacing JavaFaker with fresher DataFaker contribution. It’s great to keep libraries updated! Ritesh Shergill 🙏 for his excellent article Mock API Testing with Microcks: Rock your API world with Real world tests, proposing a walkthrough on Microcks, And a special shout out to Ludovic Pourrat 🙏 for his ApiDays London talk on Why API Metrics matter in APIOps? Ludovic explains how Lombard Odier injects production performance metrics into Microcks to better simulate API real-life behavior. He also explains how our Conformance metrics became one of their lead indicators of API health! 💪 What’s coming next? As usual, we will eagerly prioritize items according to community feedback. You can check and collaborate via our list of issues on GitHub and the project roadmap.\nRemember that we are an open community, which means you, too, can jump on board to make Microcks even greater! Come and say hi! on our GitHub discussion or Discord chat 🐙, send some love through GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn and our YouTube channel!\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-joining-cncf-sandbox/","title":"Microcks Joining CNCF as a Sandbox Project 🎉","description":"Microcks Joining CNCF as a Sandbox Project 🎉","searchKeyword":"","content":"We are excited to announce that Microcks, the open-source API mocking and testing project, has been accepted as a Sandbox project by the Cloud Native Computing Foundation (CNCF).\nWe thank the CNCF Technical Oversight Committee (TOC) members for their validation and the TAG App Delivery team for their invaluable support during this process 🙏 Josh Gavant, Abigail (Abby) Bangser, Scott Rigby, Colin Griffin\u0026hellip;\n👉 See our submission form and information for the full details.\nAs a Sandbox project within the CNCF, Microcks gains a significant milestone in its journey. 
This recognition by the CNCF, a prominent organization driving the adoption and standardization of cloud-native technologies, is a testament to Microcks\u0026rsquo; potential to contribute to the cloud-native ecosystem.\nJoining the CNCF will provide Microcks with an enhanced platform for collaboration and innovation. It opens up opportunities to engage with a diverse community of developers, organizations, and industry experts at the forefront of cloud-native technologies. We look forward to collaborating even more with other CNCF projects, contributing our technical expertise, and exploring integration possibilities.\nWe want to express our gratitude to our vibrant community who have worked tirelessly to make Microcks a thriving project. Their passion, expertise, and unwavering commitment to the open source principles have driven Microcks\u0026rsquo; growth and helped it reach this significant milestone.\nThanks to the Postman board, Abhinav Asthana and Ankit Sobti, for their trust and the opportunity to join the talented Postman Open Technologies team and its office led by Jan Schenk.\nA special kudos to Kin Lane, as nothing could have been possible without your visionary, always curious, and disruptive approach to bringing Microcks on board.\n🎂 Happy birthday, Kin. This achievement is the best present 🎁 we can offer you to thank you for what you have done and for who you are 🙌\nWe are excited about the opportunities that lie ahead for Microcks as a CNCF Sandbox project. This is fantastic community momentum, but we know it is the beginning of our level-up and the starting point of our CNCF collaboration to establish an even more robust open governance model. We are confident that this association will further accelerate the development, adoption, and impact of Microcks within the cloud-native ecosystem and are happy to welcome a diversity of contributors (new organizations and individual contributors).\nFor more information about Microcks and our participation in the CNCF, please visit https://microcks.io/ and join the community (a GitHub ⭐️ on the project is always appreciated, or you can add yourself to our growing adopters list).\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.7.1-release/","title":"Microcks 1.7.1 release 🚀","description":"Microcks 1.7.1 release 🚀","searchKeyword":"","content":"The seasons follow one another and here’s a Microcks release just ready for summer ☀️. We’re proud to announce today the 1.7.1 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing.\nWith 37 resolved issues - our record for a minor release - this release is an illustration of how community-driven the roadmap is: Amazon SQS \u0026amp; SNS support, specification goodies, and many enhancements came directly from user requests and contributions. Kudos to all of them 👏 and see greetings along the notes below.\nLet’s review what’s new in each of our highlights without delay.\nAWS Messaging protocols support cooked for you! This new 1.7.1 release brings support for two popular messaging protocols of the Amazon Web Services stack: SQS and SNS. These protocols are more and more used in combination for supporting various use-cases such as microservices communication, mobile notifications or an Edge/IoT (Internet of Things) communication backbone.\nAmazon Simple Queue Service (SQS) lets you send, store, and receive messages between software components. 
As stated by the name, it is a message queuing service where one message from a queue can only be consumed by one component. Amazon Simple Notification Service (SNS) sends notifications two ways and provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications.\nAs with the other protocols, we integrate with AsyncAPI Bindings directives that you include into your AsyncAPI document to seamlessly add the SQS or SNS support for your API:\nbindings: sqs: queue: name: my-sqs-queue sns: topic: name: my-sns-topic And of course, you’re not limited to a single protocol binding! Microcks now supports eight different protocols for AsyncAPI - enabling you to reuse the same API definition on different protocols, depending on whether you’re using messaging inside the organization or at the edge, for example.\nCheck out our updated Event-based API test endpoints documentation; the complete guide for both protocols has also been published. See the Amazon SQS/SNS Guide. For easier testing purposes, we also enabled the support of LocalStack. Thanks to Xavier Escudero Sabadell 🙏 for the help in designing and testing this.\nAPI Specs goodies Multi-protocol and multi-style APIs are a reality, and we see it every day in the community. This new revision is also the opportunity to embed enhancements related to three different specifications we support in Microcks.\nMicrocks now supports the resolution of external references on AsyncAPI specification documents. If you embed a reference like $ref: \u0026quot;./user-signedup.json\u0026quot; into your AsyncAPI file, Microcks will follow the reference, retrieve it, add this JSON Schema to the list of your API contracts, and re-reference it so it can later be used at validation time.\nThe actual behavior has been detailed in issue #782\nGraphQL support has also been enhanced with the support of multi-queries on different API operations. Supporting such constructions allows - for example - a mobile application to perform multiple different queries in a single server roundtrip.\nThanks to Stéffano Bonaiva Batista 🙏 who shared with us his use case but also the Pull Request for implementing this in Microcks. You rock!\nFinally, the gRPC protocol is not left out, as we added automatic import of internal google/protobuf/*.proto libraries when not provided in your code repository. This eases the pain of repository maintainers by lowering the number of dependencies in their repo. As these libs are provided within the protoc compiler, it’s safe to assume they’ll be there during the compilation of their protobuffer resources by Microcks.\nThanks to lennakai 🙏 for raising the issue and for the discussion #830 that led to a solution\nDeployment enhancements Podman \u0026amp; Docker compose As the Podman project releases its first Generally Available version of Podman Desktop, we found the timing was right to tidy some things up and update the experience using Podman for Microcks. Thanks also to our previous release of ARM 64 container images, we drastically simplified the usage of both Podman-compose and Docker-compose by removing redundant resources and unifying them for the three main operating systems.\nMicrocks 1.7.1 has been run successfully with the latest versions of Podman-compose, now using a simple ./run-microcks.sh. 
The experience is now the same whatever your OS.\nDocker Desktop Extension You may have seen it some days ago as announced by our fellow Hugo Guerrero: Docker Desktop Extension 0.2 is out!\nThe extension improves the Microcks experience by offering a user-friendly interface, quick access to API mock URLs, and optional integration with popular tools such as Postman. Grab it while it’s hot! 🔥 It will be updated really soon to 1.7.1.\nHelm Chart enhancements The Kubernetes installation via Helm Chart has also benefited from two enhancements: the number of desired replicas is now configurable via the values.yaml file and re-deployments are automatically triggered when a configuration change occurs.\nAs scalability and automatic redeployment were only available through the Operator, these two enhancements suggested by Sara Jarjoura 🙏 and sbr82 🙏 now allow a very scalable and dynamic setup of Microcks via Helm for better GitOps implementation.\nMore enhancements Some other minor enhancements that are worth noticing:\nJSON_BODY dispatcher presence A fix has been made to allow a dispatch decision based on the presence or absence of a JSON node in the request payload (not only the presence of a value as it was before).\nThanks to Chris Belanger 🙏 for raising this issue and for the detailed analysis.\nOpenAPI specification detection We reviewed how OpenAPI is detected when importing a new artifact, leading to a more robust detection pattern that should cover more cases (especially JSON containing spaces, single quotes, double quotes, etc.)\nThanks to Mathis Goichon 🙏 for raising this one and helping validate the fix.\nResponse templating with a parameter containing a dot This is again an issue that led to more robust behavior of the Microcks templating engine when sending parameters that may contain dots or other non-alphanumeric characters.\nThanks again to Mathis Goichon 🙏 for raising this one and helping validate the fix.\nCommunity amplification Community contributions are essential to us and do not come only from feature requests, bug issues, and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in a demonstration, inviting us to events, or even talking about Microcks!\nWe’d like to thank the following awesome people:\nNurettin Mert Aydın 🙏 for his awesome Your Local Zero Day Collocutor: Contract Based gRPC Mocking with Microcks blog post. Awesome content!\nGreat blog post from Piotr Mińkowski 🙏 on API contract testing with Microcks \u0026amp; Quarkusio. See: Contract Testing on Kubernetes with Microcks,\nHolly Cummins 🙏 for her excellent talk on Contract testing with Pact and Quarkus, mentioning Microcks in the contract testing landscape,\nHugo Guerrero 🙏 from Red Hat for having contributed the Docker Desktop Extension v0.2 code and blog post. Well done mate! 💪\nWhat’s coming next? As usual, we will be eager to prioritize items according to community feedback: you can check and collaborate via our list of issues on GitHub and the project roadmap.\nRemember that we are an open community, and it means that you too can jump on board to make Microcks even greater! Come and say hi! on our GitHub discussion or Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn and our YouTube channel!\nThanks for reading and supporting us! 
❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/docker-desktop-extension-0.2/","title":"Introducing Microcks Docker Desktop Extension 0.2: Enhanced Features and Increased Cadence 🚀","description":"Introducing Microcks Docker Desktop Extension 0.2: Enhanced Features and Increased Cadence 🚀","searchKeyword":"","content":"We are very excited to announce the availability of Microcks Docker Desktop Extension 0.2.0! This new version of our popular extension includes exciting enhancements designed to simplify and streamline API mocking and testing processes. This release focused on improving the user experience and expanding the extension\u0026rsquo;s capabilities. We are also excited to announce that we are increasing the frequency of our releases to provide our users with more frequent updates and features.\nOur team has worked hard to make the Microcks Docker Desktop Extension more comprehensive and robust.\nEnhanced Features The extension brings the capabilities of Microcks directly to your local development environment. It integrates seamlessly with Docker Desktop, allowing developers to quickly mock and test APIs without complex setup or deployment. The extension improves the Microcks experience by offering a user-friendly interface, quick access to API mock URLs, and optional integration with popular tools such as Postman. It enables developers to efficiently simulate API responses, test their applications, and shorten the development cycle while remaining in their familiar development environment.\nIn this release, we have introduced two major highlights:\nAPI Mock URLs in the Main Extension Page One of the most significant improvements is the inclusion of API mock URLs directly on the main extension page. This enhancement gives users quick access to mock URLs for their APIs, making sharing and integrating with other tools or team members easier. With a single glance, you can retrieve the URLs required to simulate API responses efficiently.\nFig. 1. API services on the extension’s main page. Fig. 2. API mock URLs by method and path. Postman Runtime as an Optional Component We recognize that not all users use Postman for API testing. In response to popular demand and to simplify your deployment, we have added Postman as an optional component that can be activated from the Microcks Docker Desktop Extension\u0026rsquo;s settings page.\nFig. 3. You can enable Postman Runtime for testing from the settings. Increased Cadence and Future Roadmap At Microcks, we are dedicated to providing you with the best possible experience and to constantly improving our tools. With the release of Microcks Docker Desktop Extension 0.2.0, we are also excited to announce an increase in our release cadence. We will deliver more frequent updates, bug fixes, and feature enhancements to ensure that you have access to the most recent capabilities and improvements.\nIn addition, we are excited to share a sneak peek at our future release roadmap. One highly anticipated feature in our pipeline is the direct API specification upload from the extension\u0026rsquo;s page. This enhancement will allow developers to upload API specifications, such as OpenAPI or GraphQL APIs, directly from the extension\u0026rsquo;s interface. By simplifying the process of importing API specifications, we aim to streamline the API mocking and testing workflows further, giving developers even more flexibility and convenience.\nWe are actively working on several exciting features for upcoming releases. 
We value your feedback and suggestions in shaping the future of Microcks, and we encourage you to share your thoughts and ideas.\nShare Your Feedback We rely on your feedback to improve the Microcks Docker Desktop Extension. We invite you to test the most recent release and to share your thoughts, ideas, and any issues you encounter. Join our vibrant GitHub community and interact with our team and other users. You can also participate in discussions and seek help through our Discord chat 🐙.\nMicrocks Docker Desktop Extension 0.2.0 marks a significant step forward in our mission to provide developers with a smooth and efficient API mocking and testing experience. With the addition of API mock URLs to the main extension page and the optional activation of Postman, we hope to improve the efficiency and productivity of your API development workflows.\nWe are committed to your success and will continue to provide frequent updates and improvements. Try out the latest release, provide feedback, and watch for future enhancements to empower your API development journey further. Thank you for your support and confidence in Microcks!\nDownload the 0.2.0 version of the Microcks Docker Desktop Extension to try out the new features for yourself, and keep checking back for updates as we continue to improve it and turn it into a crucial tool for your API development journey.\nHappy API mocking and testing!\n"},{"section":"Blog","url":"https://microcks.io/blog/join-adopters-list/","title":"Join the Microcks Adopters list and Empower the vibrant open source Community 🙌","description":"Join the Microcks Adopters list and Empower the vibrant open source Community 🙌","searchKeyword":"","content":"Open source software has revolutionized the way enterprises develop and deploy their applications. It fosters collaboration, innovation, and cost-effectiveness, enabling organizations to build secure and robust solutions while leveraging the collective knowledge and expertise of a vast and diverse community.\nMicrocks, the Kubernetes-native, multi-protocol, open source enterprise API mocking and testing solution, is an excellent example of the power of open source projects. In this blog post, we invite enterprises and community users to join the Microcks adopters list, showcasing their support for the project and contributing to its growth.\nThe Value of Microcks Microcks simplifies the testing and development process for cloud-native APIs and applications. It provides a comprehensive set of features, including service virtualization, contract testing, and API mocking, which assist developers in building reliable and resilient applications. By adopting Microcks, enterprises can streamline their development workflows, improve quality assurance, and accelerate time-to-market.\nThe latest community user blog post from J.B. Hunt: “Mock It till You Make It with Microcks” is an excellent testimonial regarding Microcks\u0026rsquo; business value and importance:\nSee the full post here 👉 https://microcks.io/blog/jb-hunt-mock-it-till-you-make-it/\nWhy do we need you? As an open source project, Microcks thrives on community involvement. By adding your organization\u0026rsquo;s name to the Microcks adopters list, you demonstrate your commitment to supporting our growing open source initiatives and encourage others to follow suit. 
Additionally, your inclusion on the list provides valuable feedback to the Microcks contributors and followers, helping them gauge the project\u0026rsquo;s adoption and impact.\nBy joining the list, you play an active role in the growth and sustainability of Microcks, enabling it to continue providing valuable tools and services to developers worldwide.\nIt\u0026rsquo;s a small contribution back to the project with a big impact, encouraging and growing community adoption and contributions. Last but not least, learning from others, following the open source principles and giving back to the community is part of the maintainer\u0026rsquo;s way of living!\n👍 You like it? Please, support it by joining existing adopters 🤝\nEditing the Adopters file on GitHub To join the Microcks adopters list, follow these simple steps.\n1. Visit the Microcks GitHub repository at https://github.com/microcks/.github (Create a GitHub account if you don\u0026rsquo;t already have one):\n2. Open the ADOPTERS.md file within the .github repository:\n3. Click on the pencil icon on the top right corner of the file view to edit it:\n4. At the end of the file, add your organization\u0026rsquo;s name (include a link to your organization\u0026rsquo;s website) and your contact details, along with a description of your usage and any other relevant resources.\nYou can copy/paste and modify the markdown example below:\n| [Perdu.com](https://perdu.com/) | [Yacine Kheddache](https://www.linkedin.com/in/yacinekheddache/) [Laurent Broudoux](https://github.com/lbroudoux) | Amazing cloud-native application development, API Mocking and Testing for multi-years digital transformation and modernization programs in collaboration with hundreds of developers worldwide 😎 5. Write a concise commit message summarizing your changes in the “Extended description” field, e.g.:\nAdd Perdu Company use case as Microcks adopter\n6. Click on the “Propose changes” button to process your edit, then click on the “Create pull request” button to submit it as a pull request:\nDone ✅ With thanks! Follow your pull request comments (review) and await the merge by the Microcks maintainers 🙌\nJoin us \u0026amp; contribute to the Microcks community 👉 Visit the GitHub repository today, make your contribution, and be part of the thriving community driving innovation and collaboration in the world of APIs.\nRespecting open-source principles also means giving back. If your organization has benefited from Microcks, please consider contributing in other ways, such as reporting issues, suggesting improvements, curating documents, giving sponsorship or even making code contributions.\nSo you are welcome to join us in making Microcks even better! Come introduce yourself in our GitHub discussion or Discord chat 🐙, show your support by giving us GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel!\nThank you for reading and for your support!\n"},{"section":"Blog","url":"https://microcks.io/blog/backstage-integration-launch/","title":"Microcks' Backstage integration to centralize all your APIs in a software catalog 🧩","description":"Microcks' Backstage integration to centralize all your APIs in a software catalog 🧩","searchKeyword":"","content":"Identifying and managing software assets has always been a challenge! It has become more and more difficult in recent years with the blast of multi-cloud deployments and practices like microservices. Fortunately, Backstage comes to the rescue and tends to become a standard for developer portals. 
Today, we are excited to announce an integration between Microcks and Backstage to ease the management of API-related assets.\nContributed to the CNCF by Spotify, Backstage is, according to their website:\n“an open platform for building developer portals. Powered by a centralized software catalog, Backstage restores order to your microservices and infrastructure and enables your product teams to ship high-quality code quickly — without compromising autonomy.”\nWe find Microcks and Backstage to be very aligned on the goal of providing a uniform approach, embracing the diversity of technical stacks and infrastructures to create a streamlined end-to-end experience and empower developers. Microcks specializes in all kinds of APIs, while Backstage provides a global framework for all kinds of software assets, so the two fit very well together in terms of strategy.\nWhat exactly is the Microcks plugin for Backstage? At the very core of Backstage is the software catalog that keeps track of ownership and metadata for all the software pieces in your ecosystem (services, website, libraries, databases, …). This catalog manages the lifecycle of entities describing those pieces. The most obvious type of entity we can find there is the Component; but it also brings the API entity type that can be used to gather metadata about OpenAPI, AsyncAPI or gRPC.\nIn Backstage, an API entity is fairly minimalistic and needs a specific metadata descriptor to be ingested and managed. That’s where Microcks can help you by providing comprehensive information about your API to Backstage and avoiding the burden of maintaining an extra file! Microcks and Backstage share this focus on the API contract for conveying API documentation. We already have all this metadata at hand in Microcks in the artifacts we’re using, so there is no need to duplicate it!\nHence, the Microcks plugin for Backstage is in charge of connecting to one or many Microcks instances, discovering APIs and synchronizing them into the Backstage catalog. The screenshot below illustrates how APIs from Microcks are synchronized into Backstage.\nMetadata on an API is very lightweight in Backstage. Since Backstage is used as a developer portal, some additional information may be of interest! Basic information like specification contracts and organizational classifiers is obviously synchronized, but we also add useful links so that developers can easily access the mock sandbox of the API as well as its conformance test results.\nThis is the first release and integration: a discovery-oriented plugin, but Backstage offers many more capabilities. Let us know what you’d like to see in the future!\nHow do I set up the Microcks plugin for Backstage? Nothing is easier! Well, sort of… 😉 First, you can find the Microcks plugin in the list of available plugins on the Backstage website as shown below:\nThen you can visit our GitHub repository to get access to full documentation and setup instructions. For those of you who have already played with Backstage plugins, you’ll see that we stick to the standards.\nAdd the microcks-backstage-provider plugin to your Backstage application with this command:\nyarn add --cwd packages/backend @microcks/microcks-backstage-provider@^0.0.2\nThen, simply edit your app-config.yml file to declare one or more Microcks named providers with their synchronization configuration.
See this sample below for a provider named dev:\ncatalog:\n  providers:\n    microcksApiEntity:\n      dev:\n        baseUrl: https://microcks.acme.com\n        serviceAccount: microcks-serviceaccount\n        serviceAccountCredentials: ab54d329-e435-41ae-a900-ec6b3fe15c54\n        systemLabel: domain\n        ownerLabel: team\n        schedule: # optional; same options as in TaskScheduleDefinition\n          # supports cron, ISO duration, \u0026#34;human duration\u0026#34; as used in code\n          frequency: { minutes: 2 }\n          # supports ISO duration, \u0026#34;human duration\u0026#34; as used in code\n          timeout: { minutes: 1 }\nFinally, add the main MicrocksApiEntityProvider class to the list of available entity providers in your application, and that’s it! 🎉\nJoin us and contribute to the Microcks community As stated above, this is our first integration and we’re excited about the possibilities ahead to create top-notch API developer portals. Let us know what you’d like to see in the future!\nKeep in mind that we are an open community, so you are welcome to join us in making Microcks even better! Come introduce yourself in our Github discussion or Discord chat 🐙, show your support by giving us GitHub stars ⭐️ or follow us on Twitter, Mastodon, LinkedIn, and our YouTube channel!\nThank you for reading and for your support!\n"},{"section":"Blog","url":"https://microcks.io/blog/jb-hunt-mock-it-till-you-make-it/","title":"J.B. Hunt: Mock It till You Make It with Microcks","description":"J.B. Hunt: Mock It till You Make It with Microcks","searchKeyword":"","content":"Collaboration in the enterprise has many challenges which can become pitfalls and roadblocks that threaten to slow agile software development to a complete standstill. Here, I’ll share how Microcks helped the Engineering and Technology team overcome obstacles and accelerate development and delivery at J.B. Hunt Transport Services, Inc.\nAt J.B. Hunt, it’s common for multiple software engineering teams to work in parallel across domains and products to deliver new features for our award-winning J.B. Hunt 360°® platform. Each team, or squad, is a small group of self-organizing cross-functional individuals working together to deliver a part of the product solution. Squads face collaboration challenges throughout this journey in communicating expectations, goals, implementation changes, and removing blockers. The greatest potential for roadblocks arises when one squad’s development work is dependent on another squad’s changes at our API (Application Programming Interface), microservice, and infrastructure layers.\nWhat\u0026rsquo;s the problem?\nDelays in the delivery of microservices or APIs are a common challenge for developers. In many cases, the wait time can range from a few weeks to several months. This not only affects feedback cycles and user acceptance testing, but also impacts roadmaps and budgets. These delays can cause frustration and slow down the development process, making it important for organizations to find ways to mitigate them.\nAre We There Yet? This is often the scenario frontend web and mobile developers face while waiting for a complete backend architecture.
Illustrated below is one such project, which was designed, developed, and delivered in 2022 to equip Carriers, those who manage a truck or fleet of trucks (tractors/trailers), with the ability to create and manage automation rules. The project enables automated fleet management tasks; for instance, Carriers can create a rule assigning a driver to all loads matching a set of defined attributes. The frontend work was dependent on developing a new API and the backend architecture, which included several Kafka-centric microservices and components.\nAs a common practice, many developers use some type of mocking or stubbing strategy, allowing them to code against a mock response. Some mocking practices work well in isolation for a single developer but fall short in meeting enterprise demands when multiple applications and developers need to be served the same response. The web and mobile squads experienced this issue while trying to work in parallel with API and backend development. We wanted both UIs to consume the same mock response so any contract changes would be available at once to both applications. The solution needed to serve mocks across the enterprise, so we turned to our internal API Special Interest Group (SIG), a self-forming team of experts passionate about the development and use of APIs at J.B. Hunt.\nWe’re Gonna Need a Bigger Boat The SIG supports an API-first strategy while advocating for an improved developer experience. Aligned with J.B. Hunt’s preference for open-source projects and a goal to better equip developers, the SIG partnered with engineering teams, security, and SRE (Site Reliability Engineering) members to deploy Microcks and make it available to developers in non-production environments.\nWhy Microcks?\nMicrocks is a scalable and dynamic solution where mocks are created and updated on the fly separate from any deployment. This feature lets developers quickly expose and maintain versioned and self-documenting mock API endpoints long before the real API is ready. And since Microcks is Kubernetes native and relies on Keycloak for security aspects, it aligns with our cloud-based Google Kubernetes Engine and Keycloak integrated infrastructure.\nThe deployment process was simple; however, given J.B. Hunt’s infrastructural layout, Microcks needed extra configuration properties to work properly. Because Microcks is open source, we were able to propose a change to the deployment configurations. The Microcks primary architect welcomed the discussions, accepted the configuration change, and incorporated it into the code base. The update not only enabled J.B. Hunt to make Microcks securely available within J.B. Hunt’s development clusters, but also resolved an open issue raised in December 2021 that prevented other organizations with similar infrastructure from using Microcks in their clusters.\nSmooth Sailing at Mach-Speed Now, any developer at J.B. Hunt can instantly create mock endpoints simply by adding example request/response pairs to an OpenAPI specification and clicking the import to Microcks option.
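To make this concrete, here is a minimal, purely illustrative OpenAPI sketch of such a pair (the path, fields, and example name are hypothetical, not taken from the J.B. Hunt project; Microcks\u0026rsquo; documented convention is that request and response examples sharing the same name form a mock pair):\npaths:\n  /rules/{ruleId}:\n    get:\n      parameters:\n        - name: ruleId\n          in: path\n          required: true\n          schema: { type: string }\n          examples:\n            assign_driver: # hypothetical example name, reused below to pair request and response\n              value: rule-123\n      responses:\n        \u0026#39;200\u0026#39;:\n          content:\n            application/json:\n              examples:\n                assign_driver:\n                  value: { id: rule-123, action: assign-driver, enabled: true }\nHere, the shared assign_driver example name is what pairs the request with its response once the file is imported.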
All dependent teams can continue previously blocked development work by calling the mock endpoints the tool exposes.\nYou Have Arrived at Your Destination. We Made It! Once dependent work is complete, teams easily swap out the Microcks endpoint for the actual implementation of the OpenAPI specification.\nAccelerating development\nThe developers of the project mentioned above saved at least 7 months using Microcks. They were not only able to work concurrently but also captured the exact business requirements specified by the product owner in the form of example request/response pairs. These persistent mocks can now be utilized in sandbox environments if needed.\nStaying On the Right Track There is more we can do with Microcks now that the solution has been delivered. The OpenAPI specification can be leveraged for automated contract testing against both the mocks and the implementation during the CI/CD (continuous integration and continuous delivery) processes. Any contract-breaking change introduced to either the OpenAPI specification or the implementation can trigger alerts configured to warn, stop, and roll back unexpected regressions.\nWe are just beginning to explore the ways Microcks can help us with other types of API contracts like AsyncAPI. But that will be another journey 😉\nCheck out the Scheduling Standards Consortium (SSC) to learn how J.B. Hunt is collaborating with Convoy and Uber Freight to define an API standard to drive efficiency in the supply chain industry.\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.7.0-release/","title":"Microcks 1.7.0 release 🚀","description":"Microcks 1.7.0 release 🚀","searchKeyword":"","content":"The end of the winter season ☃️ is coming. But unlike our fellow hibernators 🐻, instead of living off stores of fat, our amazing community has worked hard on yet another Microcks release - yes, version 1.7.0 is out 👏\nIn a few words, here are the highlights of this new release:\nSome new protocols \u0026amp; connectors - you asked for them, so NATS, Google PubSub, and Postman Workspace are now available,\nFeature enhancements - see below for further details on the issues integrated into this release (Script dispatcher and request context, enhanced templating, \u0026hellip;),\nTechnical upgrades to keep the main components secure and up-to-date - yes, we care about security 🔐 but also about being green 🍃\nAnd of course, some bug fixes based on community feedback 🙌\nThanks a lot to those who helped push up these significant features once again 🙏\nAs we’re entering Spring, a green leaf seems perfectly legit. But one can also see all the interconnected veins that we try to build with the ecosystem 🌍\nNew protocols \u0026amp; connector It has been a long time since we added new protocols \u0026amp; connectors. This release brings three new ones! Let’s start with the Event Driven / Asynchronous protocols.\nThis new 1.7.0 release brings support for NATS - a very low-latency message-oriented middleware seeing growing adoption in various industries like Gaming and Telco but also FinTech - and Google PubSub - the Google Cloud global and highly scalable messaging system that is the backbone of the Google Data Platform.
We see big demand from the community for these two protocols and we hope to share some nice stories soon 😉\nAs with the other protocols, we integrate with the AsyncAPI Bindings directives that you include into your AsyncAPI document to seamlessly add the NATS or Google PubSub support for your API:\nbindings:\n  nats:\n    queue: my-nats-queue\n  googlepubsub:\n    topic: projects/my-project/topics/my-topic\nAnd of course, you’re not limited to a single protocol binding! Microcks now supports six different protocols for AsyncAPI - enabling you to reuse the same API definition on different protocols, depending on whether you’re using messaging inside the organization or at the edge, for example.\nWhereas mocking just requires adding the binding, testing requires familiarity with the new test endpoint syntax. Check out our updated Event-based API test endpoints documentation for that. Complete guides for both protocols have also been published. See the NATS Guide and the Google PubSub Guide. Thanks to Jonas Lagoni 🙏 for the awesome contribution on NATS.\nWith this new release, we also introduce a new connector and importer for filling your Microcks repository with API artifacts: the Postman Workspace connector. While it was previously necessary to export your Postman Collection as a file to later import it into Microcks, Jason Miesionczek 🙏 asked how we could directly integrate with collaborative workspaces from Postman to remove this extra step.\nThis new integration is now shipped in the 1.7.0 release and the best thing is that it’s totally transparent for users! Just create a new importer with a https://api.getpostman.com/collections/:collection_uuid URL pattern (or anything else that conforms to the Postman Collection API 😉) and the format will be automatically detected by the importer.\nCheck out our Connect Collection workspace documentation that illustrates how to retrieve your Collection’s unique identifier and set up the secured connection through API keys if necessary.\nFeature enhancements Using Scripting for super-smart dispatching For a long time, Microcks had the SCRIPT dispatcher that allows you to define mock request dispatching logic using Groovy dynamic scripts. However, this feature was mostly unknown and reserved for SoapUI users for portability concerns. This is no longer the case as we have adapted the SCRIPT dispatcher to all the different API \u0026amp; Service types in Microcks. So you can now use it not only for REST but for gRPC and GraphQL as well!\nScripts can be used in Microcks to do a lot of powerful things like:\nanalyzing all the elements of a request to decide what response to return, calling an external endpoint to get dynamic information to make a dispatching decision, computing new data that may be put in the request context to be later used in the response template.
Let\u0026rsquo;s see it in action with the example below that fills a conditionMsg context variable that is used later in the Paris response:\ndef weatherJson = new URL(\u0026#34;https://api.weatherapi.com/v1/current.json?q=Paris\u0026#34;).getText()\ndef condition = new groovy.json.JsonSlurper().parseText(weatherJson).current.condition.text\nrequestContext.conditionMsg = \u0026#34;Today it\u0026#39;s \u0026#34; + condition\nreturn \u0026#34;Paris\u0026#34;\nAnd the Paris response, which now includes a direct reference to the request context variable:\n{\u0026#34;city\u0026#34;: \u0026#34;Paris\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;{{ conditionMsg }}\u0026#34;}\nHave a look at our new documentation on the Script dispatcher to check typical examples of how to use it. Thanks to Sébastien Fraigneau 🙏 for having suggested the request context features and to Dorian Brun 🙏 for having explored usage with dynamic arrays (see #751)!\nEnhanced response and message templates Response and message templating has also been enhanced to implement requirements from the community, like the reuse of generated values. This led us to some refactoring in the templating engine to integrate a new \u0026gt; notation that allows for post-processing of values.\nBelow you can see a sample of how a generated identifier can be put into the request context in order to be reused later when referencing a primary object:\n{\n  \u0026#34;primary\u0026#34; : { \u0026#34;uuid\u0026#34; : \u0026#34;{{ guid() \u0026gt; put(my-uuid) }}\u0026#34; },\n  \u0026#34;reference\u0026#34;: { \u0026#34;primary-uuid\u0026#34;: \u0026#34;{{ my-uuid }}\u0026#34; }\n}\nFor more information, check the put() function documentation. It is also good to know that, for compatibility purposes, we now support the SoapUI notation for functions or context access within response templates. So your SoapUI ${ } notation will be translated into the Microcks double-mustaches notation {{ }} automatically 😉\nTechnical upgrades Aside from feature improvements, the 1.7.0 release also brings a ton of technical upgrades that aim to make Microcks more efficient and secure 🔒.\nOur Java components and associated container images now all rely on OpenJDK Java 17 and the frameworks we use have been updated to recent versions - Spring Boot 2.7.8 for the main component and Quarkus 2.13.0 for the asynchronous part. We also conducted several “CVE Hunting” campaigns to reduce vulnerabilities to a minimum.\nFeel free to check the size and security scan results of our container images on Quay.io, the free and open source container registry we’re using for distributing Microcks.\nOn the topic of efficiency, we also changed two major things in the Microcks 1.7.0 distribution! This release now includes Keycloak 20, which is based on the Keycloak.X distribution - announced a long time ago, but with rather recent stabilized versions - AND all our container images are now available for the ARM architecture.\nThis release of Keycloak brings significantly faster startup times with a reduced runtime footprint. The ARM architecture is also reputed to be cheaper and more efficient 🌿, especially in the cloud ☁️.
Though we haven’t done any comprehensive benchmarks yet, the first feedback from community users is very enthusiastic!\nFor those of you using an external Keycloak that may not have been upgraded to the latest version, we checked the compatibility of Microcks 1.7.0 with Keycloak versions down to 14.0.\nCommunity feedback Community contributions are essential to us and do not come only from feature requests, bug issues, and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in a demonstration, inviting us to events, or even talking about Microcks!\nWe’d like to thank the following awesome people:\nLudovic Pourrat 🙏 for his awesome talk and super kind mentions of Microcks in Adding a mock as a service capability to your API strategy portfolio at APIDays Paris in December 2022,\nPedro Gute Teira 🙏, Pablo Curiel 🙏, timchase01 🙏, Sébastien Fraigneau 🙏, spencer-cheng 🙏 and many others for raising bugs or suggesting improvements.\nWhat’s coming next? As we recently announced our partnership with Postman to shape the multi-protocol API tooling future, the times ahead are really exciting! We all want the Microcks project to be a neutral and independent space with open collaboration on multi-protocol API topics.\nThen, one crucial next step for us will be to set up open governance and host the project on a neutral foundation in order to ensure the best possible community engagement and our long-term success. We’re currently evaluating the next best actions to help with the following goals:\nAllow onboarding of champions in governance through a steering committee, Amplify awareness and collaboration through community calls, office hours,… Ease and increase the contributions on various topics: issues, documentation, blog posts, events,… Remember that we are an open community, and it means that you too can jump on board to make Microcks even greater! Come and say hi! on our Github discussion or Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter, LinkedIn and our YouTube channel!\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-partners-with-postman/","title":"Microcks partners with Postman to shape next-gen multi-protocol API tooling ⭐️","description":"Microcks partners with Postman to shape next-gen multi-protocol API tooling ⭐️","searchKeyword":"","content":"I’m feeling proud and honored to let you know that we’re partnering with Postman, the leading platform for API development, to define the future of multi-protocol API tooling 🚀\nFrom day one, Microcks had the vision of a multi-protocol API ecosystem due to the ubiquitous nature of APIs. This is especially true at the enterprise level, where the technology strata have been built up over the years. This vision has been confirmed by our growing user base and community.\nCurrently, numerous API styles and protocols coexist, creating the need for a uniform way to accelerate and secure their delivery! Our objective remains to establish Microcks as the de-facto standard tool for delivering this unified approach.\nWe have had numerous discussions with folks at Postman lately and are excited to share that we have their full support for this mission.\nThe partnership between Microcks and Postman comes at a time when the demand for multi-protocol API tooling is rapidly increasing.
As businesses continue to shift towards digital transformation, the need for efficient and reliable API tools has become more crucial than ever before.\nWhat’s changing? Through this partnership, I became an individual Postman contributor, joining the Postman Open Technologies program. I will work full-time on Microcks to pave the way for API tooling-related projects.\nI’m also delighted to report that Yacine Kheddache will be joining Postman Open Technologies to help us build the Microcks project and the community. Yacine has been working in the shadows of Microcks for years, helping me shape the strategy and the roadmap. Kudos mate! 👏\nThe Microcks team is fully committed to open source software (OSS) and the partnership will help us keep doing our job with freedom and independence. From very early discussions with Postman, the plan has always been to keep Microcks an open source project with a community-driven roadmap.\nThis partnership will benefit the open source community by supporting the development of open source API tools. Together with Postman, we can create a stronger ecosystem benefiting developers and businesses alike.\nNext steps We all want the Microcks project to be a neutral and independent space where people may collaborate on determining the future of multi-protocol API tooling. One of the next steps for us will be to host the project on a neutral foundation in order to ensure an open governance model and long-term success. We are now in conversations with various OSS parties to determine the best approach.\nPursuing the work already initiated, we want to keep it standards-based, with a very pragmatic approach, and integrated with tools in the API Full Lifecycle \u0026amp; the API Landscape. The challenge ahead of us is enormous, and with Postman’s assistance, we will be able to grow the team, the community and the number of integrations to offer an even more streamlined user experience.\nIt means we want to collaborate twice as hard and even more closely with you, the Microcks community! We will require your help in developing our governance model and we hope that the opportunities ahead will encourage you to join and contribute to the project.\nIn the meantime, we would like to thank Kin Lane, Fran Mendez and Ankit Sobti for their help. Without their assistance, nothing would have been possible. On a personal note: a huge thank you to Anne for coping with all the time spent on this side-project and away from the family these last 7 years. You rock!\nThe future ahead of Microcks is bright! We can’t wait to hear from you, our vibrant community.\nLet’s celebrate! 🎉\n"},{"section":"Blog","url":"https://microcks.io/blog/docker-desktop-extension-launch/","title":"Microcks Docker Desktop Extension 🚀","description":"Microcks Docker Desktop Extension 🚀","searchKeyword":"","content":"We are excited to announce the release of Microcks\u0026rsquo; Docker Desktop Extension as we always love to support and improve the lives of our community members. 🎉\nIt has never been simpler to set up and use Microcks on a laptop, or from anywhere you need or want, thanks to the Docker Desktop Extension. 🙌\nWithout further ado, let\u0026rsquo;s take a quick look at how it functions and what it adds.\nWhat exactly is the Docker Desktop Extension? Docker Desktop is a simple-to-install application for Mac, Windows, or Linux that allows you to create and share containerized applications and microservices.
Docker Desktop includes the Docker Engine, the Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and the Credential Helper.\nBy directly integrating a variety of developer tools into your application development and deployment workflows, Docker Extensions enhance the functionality of Docker Desktop. Using the Extensions SDK, you can add debugging, testing, security, and networking features to Docker Desktop and create custom add-ons.\nBTW, the nice article by Ajeet Singh Raina lists a selection of curated Docker Desktop Extensions (thank you for mentioning Microcks in the API section 🥇): https://dev.to/docker/a-curated-list-of-docker-desktop-extensions-10k5\nIf you\u0026rsquo;d like to learn more about Docker Extensions, please consult the official Docker documentation.\nHow do I install the Microcks extension? The Microcks community has reported strong interest in facilitating installation and adoption within their development teams due to Docker Desktop\u0026rsquo;s ease of use and power on Mac and Windows.\nInstalling our new Docker Desktop Extension only takes three simple steps for Microcks users:\nIf you haven\u0026rsquo;t already, download and install your Docker Desktop environment ✅\nSelect extensions ✅\nChoose Microcks, install and launch it, and you are ready to go 🏆🤩\nAs we are convinced that a video is worth a million words, please see the latest entry below or on our YouTube channel for a 3-minute demonstration video of both the Docker extension installation and our new Direct API feature 😘\nBecause Docker Desktop on Linux runs on top of a VM, if you want a native way to install Microcks on Linux and are not afraid of the CLI, keep in mind that we also support Docker-compose and Podman-compose. 😇\nKudos 👏 to Hugo Guerrero 🙏 from Red Hat for having contributed to the code and doing the super nice video on “Getting Started in 3 minutes - Docker Desktop Extension”. Well done mate! 💪\nThanks also to the Docker Desktop team for making the validation and certification process a breeze! Lucas Bernalte 🙏 was a great help in understanding the details of the extension mechanism. You just rock!\nJoin us and take part in the open community of Microcks 👇 Keep in mind that we are an open community, so you are welcome to join us in making Microcks even better! Come introduce yourself in our Github discussion or Discord chat 🐙, show your support by giving us GitHub stars ⭐️ or follow us on Twitter, LinkedIn, and our YouTube channel!\nThank you for reading and for your support!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.6.0-release/","title":"Microcks 1.6.0 release 🚀","description":"Microcks 1.6.0 release 🚀","searchKeyword":"","content":"We are excited to announce the 1.6.0 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing. It has been an intense summer for us, as the previous 1.5.2 release is just 3 months old!\nWe’re now “back to work” and happy to release many features that were requested by our community! In a few words, here are the highlights of this new release:\nGovernance is certainly a huge topic and you’ll see how Microcks can bring significant insights regarding the Test Conformance of your APIs, Observability can be tightly linked to Governance as well and Microcks has new APIs to bring you functional and technical observability, And of course a lot more: Direct API concepts enhancements and a coming Docker Desktop extension among many others!
Thanks a lot to those who helped push up these significant features once again. Kudos to our amazing vibrant community 👏.\nYou may notice that this release’s focus is a bit different from the previous ones. No new protocol nor new API specification support was added… Those were the priority of previous releases, where we wanted to validate the internal model of Microcks for mocking and testing - our primary goals. This is mostly done and we can now tackle a new challenge in enhancing the quality control and governance features of our solution. Expect to see more of this in the releases to come!\nThat said, let’s do a review of what’s new in 1.6.0 on each one of our highlights.\nGovernance with Test Conformance metrics and risk evaluation You probably already know that Microcks allows you to run Contract or Conformance tests against your API implementation. It helps you stay confident that you neither break the interface agreed upon with partners nor introduce regressions.\nBut how do you easily figure this out at first sight? That’s why we introduced the Conformance index and Conformance score metrics that you can now see on the top right of each API or Service details page:\nBy just checking these visual indicators, you immediately grasp whether your tests are comprehensive for conformance validation and what the current score and trend are. What if you start having dozens of APIs or Services in your Microcks repository? The Microcks dashboard has evolved to display aggregated information on that too. Depending on the master level filter you’ve chosen to organize your repository, an aggregated Conformance score will be computed and displayed in a tree map. Below is an example where scores are grouped by domain:\nThis visualization will allow you to quickly spot the main Conformance risks associated with your API patrimony: bigger rectangles represent bigger groups of APIs and darker rectangles represent the less conformant APIs. You’ll probably want to chase big and dark rectangles 🎯\nThese metrics and indicators are available for ALL kinds of API! That means that you can now evaluate the risks of a patrimony whatever API technologies it embeds. Check our documentation on Conformance metrics for more details.\nObserve all the things! Introducing the metrics we talked about earlier has led us to completely review the way we manage observability and give insights into what’s going on inside a Microcks instance. We now distinguish two kinds of metrics: Functional metrics, related to all the domain objects of Microcks, and Technical metrics, related to resource consumption and performance.\nFor functional metrics, we introduced a bunch of new API endpoints that return JSON formatted data on how you use Microcks for invoking mocks, executing tests and so on. Here are the main endpoint categories you’ll now find in Microcks’ own API:\n/api/metrics/conformance/*\n/api/metrics/invocations/*\n/api/metrics/tests/*\nFor technical metrics, we decided to expose Prometheus-compliant endpoints that can be scraped to collect metrics.
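For illustration only - assuming you point your own Prometheus at such an endpoint; the metrics path and target host below are hypothetical placeholders, not values documented in this post - a minimal scrape job could look like:\nscrape_configs:\n  - job_name: microcks\n    metrics_path: /actuator/prometheus # hypothetical path, check your instance configuration\n    static_configs:\n      - targets: [\u0026#39;microcks.example.com:8080\u0026#39;]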
Because the Prometheus format is now a de-facto standard within the Cloud Native Computing Foundation ecosystem, this was an obvious choice to allow integration of Microcks with as many monitoring tools as possible.\nCheck our full page dedicated to Monitoring \u0026amp; Observability for more details.\nMore enhancements Enhanced Direct API For a long time, Microcks had a feature previously called Dynamic API: a way to generate a standard API in case you didn’t have an OpenAPI specification at hand. However, this was only available for REST APIs.\nIn line with our approach of managing ALL kinds of APIs, we had numerous discussions with community members on how to extend this approach to other protocols. This new release was then the opportunity to reboot this feature and rebrand it as Direct API: a way of directly generating different kinds of APIs without any specification artifact! For 1.6.0, we started by introducing support for event-driven APIs through AsyncAPI, of course 😉\nThat means that through a simple wizard, you can now ask Microcks to generate an event-driven API just by providing an event sample using JSON. In a few seconds, you’ll have everything you need to quickly onboard: published mock messages on channels, specifications for contract testing and so on 🥷\nEvent-driven API support for Direct API includes Apache Kafka and WebSocket bindings by default; but we also generate a full-blown AsyncAPI specification file with type definitions that you may refine or enrich later.\nAs our engine for Direct API has been fully rebooted, event-driven API support may be just the beginning of a whole new way of bootstrapping API contracts from resources and samples 🤔 Check our Direct API documentation for more details.\nDocker Desktop Extension This is a pretty exciting new feature that will be available in a few weeks but we can’t resist briefly introducing it here 😉\nDocker Extensions power up Docker Desktop with new capabilities that can drastically simplify your provisioning and deployment workflow for development tools! We’re excited to announce that Microcks will very soon be available through the Docker Extension Marketplace 🚀 That means that the experience of getting started with Microcks as a standalone developer instance on your personal laptop will be simpler than ever for Docker users!\nThis new feature will deserve a full blog post of its own once available. In the meantime, you can check the extension GitHub repository if you want to have a look at what we are cooking there 🧑🍳\nBetter dashboard experience As a consequence of our work on Governance and Observability, we evolved the design of the Microcks dashboard. Experienced users typically ask for more space for charts and analytics while newcomers typically want to focus on the “Getting Started” action buttons only.\nWe adopted an adaptive design where the dashboard evolves with the content of your Microcks instance. It only contains larger call-to-action buttons when you start with Microcks, then displays repository and mock usage analytics from the moment you have some APIs, and finally reveals test and conformance metrics when you’re actually running tests. With that maturity, the “Getting Started” buttons can be collapsed.\nThanks to Hugo Guerrero 🙏 for suggesting the enhancements and helping on the color-blind adaptation of the tree map colors 💪\nWhat’s coming next?
As usual, we will be eager to prioritize items according to community feedback: you can check and collaborate via our list of issues on GitHub.\nRemember that we are an open community, and it means that you too can jump on board to make Microcks even greater! Come and say hi! on our Github discussion or Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter, LinkedIn and our YouTube channel!\nThanks for reading and supporting us! ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.5.2-release/","title":"Microcks 1.5.2 release 🚀","description":"Microcks 1.5.2 release 🚀","searchKeyword":"","content":"We are delighted to announce the 1.5.2 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing. This is mainly an “Enhancement release” further developing the Microcks Hub and Marketplace we introduced a few weeks ago.\nIn our vision, the Hub will hold a central place that will allow Microcks users to easily reuse curated API Mocks \u0026amp; Test suites in a single click - but also to share and publish their own. That’s why we absolutely wanted to have a nice integration between the Hub and Microcks - and that’s the purpose of this release.\nBut as we have a vibrant community out there, it makes no sense not to also embed some enhancements the community asked for. Kudos once again to all our supporters who help find bugs 🐞, suggest enhancements and also test the fixes 👏. See greetings below.\nLet’s do a quick review of what’s new.\nHub integration FTW! hub.microcks.io and Microcks are now fully integrated and you can take full advantage of our new community hub and free marketplace wherever you need it: on-premise, in the cloud, or fully hybrid 👍 A new Microcks Hub menu entry is now available by default in the vertical navigation bar. Access to this new entry can of course be restricted to certain roles in your organization or totally removed if needed (by setting the microcksHub.enabled property to false).\nThe Microcks samples you used to add manually, as described in our Getting Started documentation, as well as standard API samples, can now be directly discovered and browsed from your instance.\nWhen choosing a specific API version, you have access to its detailed information. You can also directly choose to install it by clicking the button. From that point, you will have 2 options:\nInstall it with + Add an Import Job. This will in fact create a new automatic and scheduled import for you, so that subsequent updates of this API will be automatically propagated to your instances, Install it with a + Direct Import, which means that the import will only be made once and you’ll have to re-run the install for updates. Hub integration is a very practical way to speed up your bootstrap with Microcks but also to browse and reuse standard APIs. Please see our latest blog post regarding the Microcks hub for further information 📖 https://microcks.io/blog/microcks-hub-announcement/\nOther enhancements Postman URLs correct fallback There was a bug when defining an API operation using Postman Collections and templates (e.g. a URL with /path/:param/sub) with the FALLBACK dispatching strategy.\nWithout an exact match, Microcks tries to find the correct operation with pattern matching.
It turned out that the regular expression used to match the operation path - and thus find the correct operation the mock URL is attached to - was incorrect.\nThanks a lot to Madiha Rehman 🙏 who found this bug and helped validate the fix (see #597).\nFix GitLab file name and references resolution As GitLab URLs are built with an encoded path and a filename that is not located at the end of the URL (e.g.\nhttps://gitlab.com/api/v4/projects/35980862/repository/files/folder%2Fsubfolder%2Ffilename/raw?ref=branch), we realized that we cannot just extract the last part of the URL to get the file name.\nThis leads to inconsistent behavior when using Multi-artifacts support: all source artifacts being identified as raw?ref=branch, they are overwritten when importing different artifacts successively. Moreover, this breaks the reference resolution mechanism, which also relies on a simple, unencoded file name and path in the repository URL.\nTo get around this specific encoding, we have set up something more sophisticated so that Microcks will be well prepared to handle other encoding implementations in the future.\nThanks a lot to @imod 🙏 for the reproduction scenarios and hints on how GitLab provides information on encoded filenames (see #605).\nWhat’s coming next? As usual, we will be eager to prioritize items according to community feedback: you can check and collaborate via our list of issues on GitHub.\nRemember that we are an open community, and it means that you too can jump on board to make Microcks even greater! Come and say hi! on our Github discussion or Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter, LinkedIn and our brand new YouTube channel!\nThanks for reading and supporting us!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-hub-announcement/","title":"Microcks’ hub and marketplace!","description":"Microcks’ hub and marketplace!","searchKeyword":"","content":"We are very proud to announce the launch of Microcks’ community hub and free marketplace 👉 hub.microcks.io! This has been discussed and requested many times within our community and here we are 🙌\nThe goal of this new community website is to collect, curate and share API Mocks \u0026amp; Test suites that can be installed and used within any Microcks instance in a single click.\nIf, like us, you like craft beers, let’s draw an analogy between this announcement - freely sharing API Mocks and Test suites - and a brew bar!!! Close to where you live, with free, fresh, juicy, and hoppy craft beers on tap daily 🤩 This is exactly what the Microcks Hub is providing for API development and you can enjoy it without moderation 🎉\nLet’s do a review of how it works without delay.\nWhat is it and why is it important? hub.microcks.io allows API owners - companies, developers, standardization organizations, regulatory committees, and product managers - to easily distribute their public open API specifications in the form of ready-to-use mocks and test suites for Microcks.\nMicrocks users (API consumers here) can directly access hub.microcks.io to retrieve these API artifacts.
A single click, command line, or API call makes them actionable to cover and speed up many useful use-cases:\nDiscover and develop with APIs, Create sandboxes for your developers, Promote your APIs and animate and grow your API consumer community, Evaluate the impacts of an API version upgrade without deploying the new version or new product, Assess consumer and partner implementations and ensure quality assurance using an API Test and Certification Kit, Last but not least, as - in our humble opinion - a producer\u0026rsquo;s goal is to make API consumption smooth and easy, you can now dramatically help yourself keep your promise to your consumers and improve it more responsibly 😇\nLet’s give a real example: the OpenBanking.org.uk use-case As an example, you can have a look at the OpenBanking.org.uk initiative and API specifications: https://standards.openbanking.org.uk/\nWe find it to be a perfect illustration of the API Test and Certification use-case. Let’s describe this use case in more detail: As a Bank or Fintech startup, I want to provide a set of APIs that respect the OpenBanking.org.uk standards.\nOf course, I can get the Swagger definitions of the standard from the developer portal, but how can I assess that my development team has fully understood and implemented the standard correctly? What is my level of compliance?\nThis is where Microcks and our new hub.microcks.io come to the rescue! As the OpenBanking.org.uk API owner, I can just reference my OpenAPI specs and Postman Collection using lightweight metadata so that Microcks users will be able to use it to ensure their implementation is compliant with the standard.\nFor Microcks users, it just involves 3 simple steps:\nSet up (if not done already 🥇) a private internal Microcks instance, Browse hub.microcks.io to discover the API you’re interested in and import the corresponding assets, which creates the mocks and test suite in your instance, From Microcks, launch tests on your implementation to check conformance. As the hub wraps different kinds of artifacts, you can validate: contract syntactic rules checking OpenAPI schema conformance, OR business behavior rules using Postman Collection test scripts. It has never been as easy to do Open Banking and follow the standard and regulatory requirements as it is using Microcks and the community API Mocks and Test suites 🚀 We love and are happy to support #fintech #startups 😘\nAnother use case from HashiCorp Terraform Enterprise is HashiCorp\u0026rsquo;s self-hosted distribution of Terraform Cloud. It offers enterprises a private instance of the Terraform Cloud application, with no resource limits and with additional enterprise-grade architectural features like audit logging and SAML single sign-on…\nWe have been in touch with some companies who are using Terraform Enterprise in production and rely on the Terraform Enterprise API for their business. The issue is that for each new Terraform Enterprise release or upgrade, HashiCorp\u0026rsquo;s customers need to install the new version and re-test all their tooling (Terraform Enterprise API consumer tools in this case). This is time-consuming, costly, and not efficient from an automation perspective…\nSo we worked hand in hand with our friends from HashiCorp to provide full Mocks and Test suites for Microcks and to share them on hub.microcks.io:\nSo now, any Terraform Enterprise customer can easily create a sandbox and test all their existing tooling on the latest release or pre-release and modify their consumer code accordingly.
Last but not least, they can integrate Microcks in their existing CI/CD pipeline to fully automate this tedious process 👍\nThe Terraform Enterprise mocks repository is available here: https://github.com/nehrman/terraform-enterprise\nKudos to Nicolas Ehrman and the HashiCorp community for this contribution 👏\nHow does it work? Microcks leverages standard specifications and formats like Swagger (aka OpenAPI v2), OpenAPI v3, AsyncAPI, Postman Collection, GRPC/Protobuf files, GraphQL, legacy SoapUI,\u0026hellip; Adding them to the Hub is just a matter of adding some metadata using a manifest.\nTo define this metadata, we’re introducing two concepts:\nThe API Package is the top-level concept that allows you to wrap together a set of related APIs. The package can be related to an Open source project, a commercial product, or an industrial standard and it must belong to a specific business category.\nThe API Versions are simply the versioned APIs that are members of the package. The Hub will keep the history of the different versions you’ll release through your package. An API Version links to your API artifacts through its contract property, as illustrated in the schema below.\nThe community-mocks repository holds initial contributions and examples, as well as the validation materials (JSON schemas) for the metadata contributors have to provide.\nPlease check this document for further details: https://hub.microcks.io/doc/package-api-mocks\nHow to contribute an API package All the up-to-date information to contribute and publish your API on the Microcks hub is available here: https://hub.microcks.io/doc/how-to-contribute\nBTW, in case you ask ⁉️ hub.microcks.io is not an alternative to or a competitor of Postman Public Workspaces. We really like the fact that you can discover and play with APIs using Postman Workspaces and many Microcks users are using Postman Collections. But Enterprises need to develop effective production APIs and this is where Microcks and Postman make perfect sense together 🤝\nThe Microcks Hub contribution can be seen as a complementary step that will allow you to scale your API usage. It will let you integrate your API adoption into every possible usage scenario required by your users (on-premises, cloud-based or offline, on-demand mocking, conformance testing, etc.)\nEnthusiastic? We hope this walkthrough has made you enthusiastic about this new killer feature and that API producers and consumers will join the community, like the FIWARE Foundation, OpenBanking.org.uk, Stet.eu, HashiCorp, and more to come: stay tuned 📢\nRemember that we are an open community, and it means that you too can jump on board to make Microcks even greater! Come and say hi! on our Github discussion or Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter, LinkedIn, and our brand new YouTube channel!\nThanks for reading and supporting us!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.5.1-release/","title":"Microcks 1.5.1 release 🚀","description":"Microcks 1.5.1 release 🚀","searchKeyword":"","content":"We are proud to announce the 1.5.1 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing. We considered it a minor release this time as it “just” brought a new protocol binding and a lot of enhancements!\nOnce again this release is an illustration of how community-driven the roadmap is: AMQP and Swagger v2 support as well as more enhancements came directly from user requests.
So thanks a lot to those who helped push up a new release with significant features once again. Kudos to all of them 👏 and see greetings below.\nAs we’re also entering the Easter season, we couldn’t resist playing up the rabbit side of things 😉\nLet’s do a review of what’s new on each one of our highlights without delay.\nAMQP/RabbitMQ, you asked for it: here it is! With tens of thousands of users, RabbitMQ 🐇 is one of the most popular open source message brokers. It uses AMQP - the Advanced Message Queuing Protocol - in version 0.9 (not to be confused with the AMQP 1.0 protocol, which is quite different).\nAt Microcks, we identified the importance of RabbitMQ as it ranked high in previous community polls and is a technology of choice in the NodeJS and Java Spring communities ☕\nAs usual, we integrate with the AsyncAPI Bindings directives that you include into your AsyncAPI document to seamlessly add the RabbitMQ support for your API:\nbindings:\n  amqp:\n    is: routingKey\n    type: topic\n    durable: true\n    autoDelete: false\n    vhost: /\nOf course we support queues and all the different types of exchanges for both mocking and testing.\nWhereas mocking just requires adding the binding, testing requires familiarity with the new RabbitMQ/AMQP endpoint syntax. Check out our updated Event-based API test endpoints documentation for that. A complete guide is to come soon!\nSwagger v2, you asked for it: here it is too! From the beginning, we didn’t support the Swagger (aka OpenAPI v2) standard in Microcks, as Swagger is incomplete and does not allow specifying full examples and request/response mappings. Especially:\nParameter does not allow specification of examples, Request does not allow specification of examples, Response examples cannot be named and are unique for a mime type. So from the start, we supported OpenAPI v3, which does not have these limitations. And that was a nice fit for us as Microcks followed the 1 artifact == 1 API mock definition principle.\nHowever, we did get feedback from the community and are now convinced that this approach can sometimes be too restrictive. A use-case that is emerging is that some people may have a single OpenAPI file containing only base/simple examples but are managing complementary/advanced examples using a Postman Collection. As a consequence, we implemented the Multi-artifacts support in release 1.3.0.\nThe thing we didn\u0026rsquo;t think about at that time is that Multi-artifacts support could also be leveraged to finally support Swagger v2 in Microcks! It allows you to reuse your Swagger v2 contracts and related Postman Collections to get direct mocking and contract-testing within Microcks. 💥\nIn a similar fashion to gRPC or GraphQL support in Microcks, you’ll first need a Swagger v2 file that will be considered the primary artifact holding service and operation definitions, and then rely on a Postman Collection that holds your mock dataset as examples:\nCheck out our Swagger conventions for Microcks documentation that illustrates how a Swagger v2 specification and a Postman Collection can be combined and used together.\nMore enhancements Consistent behavior for Subscribe and Publish in AsyncAPI At the beginning of Microcks, we started supporting the SUBSCRIBE operations of AsyncAPI only - with the Kafka binding. This is because it was the most obvious thing to understand: walking in the shoes of an AsyncAPI consumer.
However, with more maturity and new implementations (MQTT, WebSocket), we started implementing things for PUBLISH operations as well, but this was not backported to Kafka and not very consistent regarding the UI.\nWe fixed this and got this little drawing below to summarize the use-cases:\nWe have now made everything consistent, whatever protocol you’re using. Mocking can be used by API consumers for SUBSCRIBE as well as providers for PUBLISH. Testing can be used to validate API providers for SUBSCRIBE as well as consumers for PUBLISH. Thanks to Hassen Bennour 🙏 and tom (Zulip user) 🙏 for testing it 🧪\nResolution of OpenAPI external dependencies For unknown reasons, the resolution mechanism that was used at import time for AsyncAPI spec files was not yet available for OpenAPI. However, referencing JSON Schema files from OpenAPI files is now a very common practice. We fixed this.\nIt means that Microcks 1.5.1 will now be able to resolve your local dependencies (like in $ref: ../my-schema.json#MyRequest) as well as external ones (like in $ref: https://acme.org/schemas//my-schema.json#MyRequest). Thanks to Hans Peter (Zulip user) 🙏 and redben 🙏 for suggesting the enhancement 😉\nCustom certificates on OpenShift When using the Kubernetes Operator to deploy on OpenShift, Routes are created to allow external access to the different Microcks services. Before the 1.5.1 release of the Microcks Operator, routes were created with default settings regarding TLS, so they had to reuse the default configuration for the cluster ingress controller.\nThanks to Arjun (Zulip user) 🙏 for suggesting the enhancement. With Microcks Operator 1.5.1, you now have the ability to specify custom TLS certificates for Routes, either by putting them directly into the custom resource or by using labels that will trigger the cert-utils-operator and cert-manager certificate management services.\nCommunity amplification Community contributions are essential to us and do not come only from feature requests, bug issues, and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in a demonstration, inviting us to events, or even talking about Microcks!\nWe’d like to thank the following awesome people:\nNoelia Martín Hernández 🙏 for her awesome introduction to Kafka events mocking with AsyncAPI and Microcks, in Spanish, on the Paradigma Digital blog, Nicolas Ehrman 🙏 and Jérôme Delabarre 🙏 from Hashicorp for a very nice chat in French regarding Microcks’ genesis, the OpenShift Coffee Break 🙏 Red Hat team for inviting us to talk about API testing into a Microservices world with Microcks. The recording is available on YouTube too, Hugo Guerrero 🙏 from Red Hat for having contributed a super nice video on Creating Fluid API Mocks in 3 minutes on our YouTube channel. Well done mate! 💪 What’s coming next? As usual, we will be eager to prioritize items according to community feedback: you can check and collaborate via our list of issues on GitHub.\nRemember that we are an open community, and it means that you too can jump on board to make Microcks even greater! Come and say hi!
on our Github discussion or Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter, LinkedIn and our brand new YouTube channel!\nThanks for reading and supporting us!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.5.0-release/","title":"Microcks 1.5.0 release 🚀","description":"Microcks 1.5.0 release 🚀","searchKeyword":"","content":"We are excited to announce the 1.5.0 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing. Just three months have passed since the previous iteration, and our supporters in the community helped us push up a new release with significant features once again. Thanks a lot to all of them 👏\nIn line with our mantra, this release is evidence of our vision of a unique tool with a consistent approach for speeding up the delivery and governing the lifecycle of ALL kinds of APIs. As a result, in Microcks 1.5.0, we now support GraphQL API technology.\nAdding GraphQL allows Microcks to complete the picture and become the only and ultimate tool that supports all the different standards of APIs: REST, SOAP, gRPC, Graph, and Events based on various protocols. Moreover, we integrate with de-facto standards for API Dev Tooling and CI/CD pipelines - offering integration whatever your delivery process or tooling!\nWe also love ❤️ and value the community, and we try to serve it by listening to and implementing feedback and enhancement ideas. This new release carries a lot of them, aimed at a lighter and faster bootstrap experience of Microcks for different use-cases.\nLet’s review what’s new on each one of our key highlights.\nGraphQL support Various API reports, like Postman’s 2021 State of the API Report, spotted GraphQL as one of the most exciting technologies to consider for APIs. GraphQL is an open-source data query language that is an excellent complement to REST APIs when it comes to offering flexibility to clients that can fetch exactly the data they need.\nAt Microcks, we identified the importance of GraphQL and are sure it’s a perfect fit for the Microcks model and features 😉. It is also another opportunity to demonstrate one of the beauties of the great “Multi-artifacts support” feature we introduced back in Microcks 1.3.0. It allows us to unlock virtually any new protocol integration cleanly and smoothly 💥\nWe are big supporters of the contract-first approach and rely on it. You will first need a GraphQL Schema - expressed using the Schema Definition Language - to import the operations’ definition of your API into Microcks. Because the schema doesn’t support the notion of examples - contrary to OpenAPI and AsyncAPI specifications - you will need to rely on a Postman Collection that holds your mock dataset as examples.\nCheck out our GraphQL usage for Microcks documentation that illustrates how GraphQL Schema specifications and Postman Collections can be combined and used together. You’ll see that defining mocks and tests is as easy as describing request and response expectations using JSON. Microcks will implement all the specificities of GraphQL fetching under the covers.\nIf you are a hands-on person and need a more detailed walkthrough of available features, we recommend you also read our “GraphQL features in Microcks: what to expect?” blog post. It illustrates the mocking and testing specificities we introduced to support GraphQL query semantics.\nBetter and lightweight developer experience One significant advantage of Microcks is its versatility.
Of course, it can be installed as an “always up-and-running” central instance shared with different teams, but we also see many other uses through community feedback. People use it on their development laptops, as ephemeral instances popped by CI/CD pipelines or other “Mock as a Service” automations. Unfortunately for these use cases, the deployment of Microcks - especially with the asynchronous features turned on - gets a bit greedy with resources.\nFor this reason, we decided to lighten things up and make the deployment of Microcks a breeze on developers’ laptops and constrained environments concerned by bootstrap time or resource consumption. Let’s see what we got with the previous 1.4.1 version of Microcks:\n$ docker-compose -f docker-compose.yml -f docker-compose-async-addon.yml up -d [...] $ docker stats --format \u0026#34;table {{.Container}}\\t{{.Name}}\\t{{.CPUPerc}}\\t{{.MemUsage}}\u0026#34;\nCONTAINER NAME CPU % MEM USAGE / LIMIT\n3687d032ecad microcks-async-minion 1.82% 266.2MiB / 6.789GiB\n5ab9aaf5bed2 microcks 0.67% 325.1MiB / 6.789GiB\n45e11517bac7 microcks-kafka 3.67% 404.1MiB / 6.789GiB\ncc5a005ea7ff microcks-sso 4.31% 698.9MiB / 6.789GiB\n75dc0105b97d microcks-db 0.95% 137.1MiB / 6.789GiB\n7f5da24afe45 microcks-zookeeper 0.53% 104.4MiB / 6.789GiB\n2b9b5479d734 microcks-postman-runtime 0.00% 41.52MiB / 6.789GiB\nAll the popped-up containers (7!) were using a total of 1975 MiB of memory. On our two-year-old MacBook Pro machine, the bootstrap time was about 40 seconds to access the UI and 45 seconds to have a first mock message published on a Kafka topic.\nWe identified two potential enhancements to make this experience leaner. First, we made the infrastructure lighter by removing Keycloak in developer mode, where users typically want administrative privileges. Then, we made the async components lighter by replacing the Strimzi Kafka cluster with a Redpanda broker that provides Kafka-compatible interfaces.\nLet’s now see the results using the new 1.5.0 docker-compose-devmode.yml file:\n$ docker-compose -f docker-compose-devmode.yml up -d [...] $ docker stats --format \u0026#34;table {{.Container}}\\t{{.Name}}\\t{{.CPUPerc}}\\t{{.MemUsage}}\u0026#34;\nCONTAINER NAME CPU % MEM USAGE / LIMIT\n832548c518d3 microcks-async-minion 2.06% 243.2MiB / 6.789GiB\n6641782436b5 microcks 0.52% 311.8MiB / 6.789GiB\n2a95a07f1de8 microcks-postman-runtime 0.00% 38.16MiB / 6.789GiB\nf99c91ff63f5 microcks-kafka 24.74% 136.1MiB / 6.789GiB\n5dee5cea1a6c microcks-db 0.78% 132.8MiB / 6.789GiB\nWe now pop only five containers using a total of 860 MiB of memory. On the same MacBook Pro machine, the bootstrap time is now about 12 seconds to access the UI and 15 seconds to have a first mock message published on a Kafka topic.\nWow! We saved around 1 GiB of memory - more than 50% less - and reduced the startup time by a factor of three on the same machine! Not too bad 😉 Of course, we’re open to any further enhancements in the future, and we hope this better experience will open up the doors to many new use-cases!
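For the curious, here is a minimal sketch of what such a dev-mode Compose file can look like - service names, images and layout below are assumptions for illustration, not the actual content of docker-compose-devmode.yml:\nservices:\n  mongo:\n    # plain MongoDB for the Microcks repository\n    image: mongo:3.4.23\n  kafka:\n    # a single Kafka-compatible Redpanda broker replaces the Strimzi Kafka + ZooKeeper pair\n    image: vectorized/redpanda:latest\n    ports:\n      - \u0026#34;9092:9092\u0026#34;\n  app:\n    image: quay.io/microcks/microcks:1.5.0\n    ports:\n      - \u0026#34;8080:8080\u0026#34;\n  async-minion:\n    image: quay.io/microcks/microcks-async-minion:1.5.0\nNote the absence of Keycloak and ZooKeeper services, which is where most of the memory savings come from.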
More enhancements Faster startup on Kubernetes With the latest version of Microcks, people experienced issues starting the main pod on Kubernetes in constrained environments. The container could take a long time to boot up, causing Kubernetes to kill and restart it many times. Depending on your cluster’s default resource allocation, it can take some time to have a healthy Microcks pod.\nWe investigated those issues with the community and identified enhancement topics:\nThe first was about the JVM ergonomics that hadn’t been updated with the upgrade to Java 11. With new settings, the JVM is now fully aware that it runs in a container and in Kubernetes, so that it can accurately auto-tune the various -X startup flags, The second was defining a dedicated startupProbe in our Kubernetes manifests to avoid pod restarts on bootstrap without penalizing failure detection once the pod has started - see the sketch just below.
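For illustration, a dedicated startup probe on the Microcks container can look like this minimal sketch - the health path, port and thresholds are assumptions, not the exact values from our manifests:\ncontainers:\n  - name: microcks\n    image: quay.io/microcks/microcks:1.5.0\n    startupProbe:\n      # give the pod up to 30 x 10s to boot before Kubernetes declares it failed\n      httpGet:\n        path: /api/health\n        port: 8080\n      periodSeconds: 10\n      failureThreshold: 30\n    livenessProbe:\n      # once started, keep failure detection tight\n      httpGet:\n        path: /api/health\n        port: 8080\n      periodSeconds: 10\n      failureThreshold: 3\nThe startup probe only applies while the container is bootstrapping, so the liveness probe can stay aggressive without triggering restarts during a slow boot.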
These enhancements have been applied to both our Helm Chart and Operator manifests. We noticed a 30% speed-up of the bootstrap time when we applied the enhanced version on our test clusters using the default resource constraints. The new probe avoids unintentional restarts in very constrained environments and, hence, Kubernetes scheduler saturation. We plan to publish a detailed blog post on our findings and results, so stay tuned 😉\nSecurity updates Security is undoubtedly one of our primary concerns as we know organizations use Microcks in enterprise contexts. The first task on this topic was to ensure the Log4Shell CVEs do not impact Microcks. Microcks does not use log4j directly, but we wanted to ensure that no other transitive dependency includes and activates it. So we ran different test suites for the Log4Shell vulnerabilities and made sure it was a non-subject for us.\nThis release also brings a lot of enhancements:\nWe updated the Jackson library to the newest release, eliminating several CVEs (see issue #53), We updated the Spring Boot framework to the latest 2.6 release with numerous dependency upgrades (see issue #536), We updated base container images to remove any known vulnerabilities to date (see issues #517 and #518). You can also check our security scanning reports on Quay.io 😇 Performance tweaks As part of our investigations on Kubernetes startup time and framework upgrades, we also had an extensive work session checking the performance of Microcks. Moreover, community users report using Microcks to mock dependencies in performance testing scenarios. So they don’t want it to show up as a bottleneck!\nThanks to Miguel Chico Espin 🙏 for helping us with performance figures. You can follow our discussion on issue #540. Miguel also mentioned he was able to disable some analytics for better throughput. That’s what we did in #541. Finally, as performance tweaking without observability is like going blind, we added Prometheus metrics export to our components. See issue #411.\nCommunity Community contributions are essential to us and do not come only from feature requests, bug issues, and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in a demonstration, inviting us to events, or even talking about Microcks!\nWe’d like to thank the following awesome people:\njohn873950 🙏 who contributed enhancements to our Helm Chart, allowing us to add annotations or change the Service type, Madiha Rehman 🙏 who found bugs regarding artifact upload size (see #525) and the use of special characters in mock URLs (see #529), the AsyncAPI Conf 🙏 team for inviting us to talk at their latest event about AsyncAPI (of course!), CloudEvents, and Microcks. The recording is available on YouTube. What’s coming next? As usual, we will be eager to prioritize items according to community feedback: you can check and collaborate via our list of issues on GitHub.\nRemember that we are an open community, and it means that you too can jump on board to make Microcks even greater! Come and say hi! on our Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter and LinkedIn.\nThanks for reading and supporting us! May the beginning of 2022 keep you safe and healthy. ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/graphql-features-what-to-expect/","title":"GraphQL features in Microcks: what to expect?","description":"GraphQL features in Microcks: what to expect?","searchKeyword":"","content":"In various 2021 reports, GraphQL has been spotted as one of the most exciting technologies to consider for APIs. It is a query language and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.\nAt Microcks, we also identified the importance of GraphQL and thought it a perfect fit for the Microcks model and features 😉 This post is a walkthrough of the coming GraphQL features in Microcks 1.5.0, to be released in a few weeks. It will give you insight into the GraphQL feature set we will support and how it works underneath.\nYou’ll see that GraphQL is no different from the other API standards we are supporting in Microcks like OpenAPI, AsyncAPI and gRPC. We stick to our mantra of providing a homogeneous approach whatever the technology stack, embracing diversity. But GraphQL’s flexibility from the consumer point of view was another opportunity to demonstrate the smartness of our engine and hence deserved this blog post.\nBefore diving into the mocking and testing features, let’s have a quick look at what you’ll need to use them in Microcks.\nWhat you’ll need? In keeping with the contract-first approach we’re big supporters of and rely on, you’ll first need a GraphQL Schema - expressed using the Schema Definition Language - to import the operations definition of your API into Microcks.\nAs a GraphQL schema doesn’t support the notion of examples - contrary to OpenAPI and AsyncAPI specifications - you’ll rely on a Postman Collection that holds your mock dataset as examples.\nThanks to the multi-artifacts support feature we introduced in release 1.3.0, Microcks will be able to import both resources as primary and secondary artifacts to merge information and build a consolidated view of your GraphQL API.\nIf you need some illustration for better understanding, feel free to check out our GitHub repository, focusing on the films* resources for our Movie Graph API - version 1.0 detailed thereafter.\nMocking GraphQL API features After having defined and imported the required artifacts, let’s take a tour of the different features using our Movie Graph API - version 1.0 sample.\nIntrospection Queries It\u0026rsquo;s often useful to ask a GraphQL schema for information about what queries it supports. For that, GraphQL has specified the introspection system - a system we implemented in Microcks! 
So once you have the mock endpoint URL of your API, you can use smart GUI clients like Insomnia to start playing around with your API and discover queries and data structures.\nField Selection and fragments At its very core, a GraphQL query is about selecting a set of fields on objects. That’s obviously a feature we support in Microcks. You can issue different requests matching the same response but with different field selections: Microcks will apply filtering on the response content to adapt it to the specific set required by the client.\nIn the capture below, we redefined the set of required fields and see that the response has been filtered to fit these fields.\nField selection can also be expressed using fragments that centralize the selection definition. Microcks supports fragment spreads and associated definitions in a transparent way. Fragments are notably very useful for the next feature…\nMulti-queries and aliases One GraphQL query can embed different queries and selections to invoke on the server side. When using this multi-queries construction, the consumer will also need to define aliases that will be reused by the provider when formatting the aggregated response. This feature is handled by Microcks mocks so that you can combine many operations within one mock invocation, as illustrated below:\nArguments and variables GraphQL has the ability to pass arguments to queries or mutations. In Microcks as in the specification, these arguments can be passed either inline or using a variable notation that references a query variables element defined as JSON alongside the query.\nWhen you define a GraphQL operation that uses only GraphQL scalar types, Microcks automatically uses a new QUERY_ARGS dispatcher that analyses argument values to match the corresponding response in your sample. This allows Microcks to have smart mock behavior to implement common queries and mutations like findFilmById or findFilmByRating or addStarToFilmWithId and so on.\nMutation with custom type You can also choose to use custom types as query or mutation arguments! Microcks will unfortunately not be able to automatically infer dispatching rules in that case. But it will allow you to simply define your own smart dispatcher, using the JSON BODY dispatcher. With this one you’ll be able to easily define an evaluation rule on the query variables JSON to return the response to the client.\nBelow you can see an example of query variables JSON that will be evaluated to return the correct Film to add a review to:\nAdvanced features We already have a bunch of exciting features, but it’s worth noting that some other features of Microcks are obviously still available for GraphQL mocking as well!\nWe can mention here:\nTemplating expressions and functions - so that you can include dynamic or random content in your mock responses using notations like {{ guid() }} or {{ request.body/filmId }}, the FALLBACK dispatcher if you want some complex try-catch behavior in your matching rules when dispatching to a response, the SCRIPT dispatcher that offers you all the power of Groovy scripting for request dispatching (documentation to come soon). Testing GraphQL API features Besides the mocking features in Microcks, there’s always the second side of the coin: the testing features!\nTesting a GraphQL API in Microcks means that we’ll reuse the different unitary operations of your API against a Test Endpoint that represents the backend implementation of this API. 
For each and every example in your API, Microcks will invoke the backend and record the exchanged requests and responses. The request is recorded using the HTTP POST representation of a GraphQL query; the response is recorded as is. After this recording step, Microcks will finally perform a validation step to check that the returned response conforms to the GraphQL Schema defining your API. This allows it to mark the test as passed ✅ or failed ❌.\nAs usual with other API technologies, tests in Microcks can be launched through the UI, the API, the Jenkins plugin, the GitHub Action, the Tekton task or a simple CLI for full automation.\nFor the technical readers, one subtle detail to notice is that a GraphQL Schema does not in fact allow direct validation of response content. A GraphQL Schema is much more like a grammar defining the range of possibilities for a response. In GraphQL, actual response validation can only be done if you precisely know what was requested by the client.\nWe think we found an elegant solution to this problem when implementing validation in Microcks. As we precisely know the request sent to the tested backend and have the GraphQL Schema at hand, Microcks dynamically builds a new JSON schema for each request - merging request information with schema information. This in-memory representation of a JSON schema is then used to validate the response from the backend, making sure we’re doing validation as specific as possible. 🥳\nEnthusiastic? We hope this walkthrough has made you enthusiastic about this new set of features coming in January in Microcks 1.5.0. The best thing is that you don\u0026rsquo;t have to wait for the release to test them out!\nEverything is always present in the 1.5.x branch of the GitHub repository and in the nightly tagged container image. So starting to play with this new GraphQL support is as simple as this one-liner:\n$ git clone https://github.com/microcks/microcks.git \u0026amp;\u0026amp; cd microcks \\ \u0026amp;\u0026amp; git fetch \u0026amp;\u0026amp; git checkout 1.5.x \u0026amp;\u0026amp; cd install/docker-compose \\ \u0026amp;\u0026amp; docker-compose up -d Now just open your browser on http://localhost:8080 and connect with admin/microcks123 🚀\nAs usual, we’re eager for community feedback: come and discuss on our Discord chat 🐙\nThanks for reading and supporting us!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.4.1-release/","title":"Microcks 1.4.1 release 🚀","description":"Microcks 1.4.1 release 🚀","searchKeyword":"","content":"We are thrilled to announce today the 1.4.1 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing. This release is another demonstration of the ability of Microcks to play on both sides, with new enterprise-related features but also enhancements to the Developer eXperience.\nYou’ll see that we put a lot of effort (and love ❤️) into listening to and implementing feedback and ideas from our community: the number of people that suggested, contributed or helped amplify Microcks’ reach in communities is huge!\nKudos to our community users and partners for supporting and pushing us to this momentum 👏 See greetings below.\nLet’s review what’s new on each one of our highlights without delay.\nAnd yes… we screwed things up on the 1.4.0 release… so we jumped directly to 1.4.1 instead 😉\nRepository multi-tenancy Starting with this release, we introduce the ability to segment your APIs \u0026amp; Services repository in Microcks. 
This feature was critical for some heavy users of Microcks who use it to manage dozens or even hundreds of APIs within their global organization - and we’re very happy to have had them on board to validate the design and implementation early, before the release date. Thanks to Romain Gil and Nicolas Matelot 🙏\nRepository multi-tenancy is explicitly opt-in and leverages the labels you assign to APIs \u0026amp; Services. As an example, if you define the domain label as the primary label with customer, finance and sales values, you’ll be able to define users with the manager role only for the APIs \u0026amp; Services that have been labeled accordingly. Sarah may be defined as a manager for domain=customer and domain=finance services, while John may be defined as the manager for domain=sales APIs \u0026amp; services.\nFor each and every tenant, Microcks takes care of creating and managing dedicated groups. The Microcks administrator will then be able to assign users to groups easily, as illustrated below:\nHow to enable and manage a multi-tenant repository? It’s very easy! New options have been added to both the Helm Chart and the Operator. Check our updated documentation on activation and user groups management.\nScaling labels using new APIMetadata You may have understood that labels are an important part of multi-tenancy support, but also more generally of repository organization, even in a single-tenant configuration. Hence we wanted to make it easier for you to set, update and manage labels at scale. We introduced a new APIMetadata descriptor that allows you to specify:\nlabels for your API and also, operations mocking properties. This descriptor can live in your Git repository, close to your specification artifacts, so that it follows the “Git as the source of truth” principle! Microcks will be able to import it repeatedly to track changes due to API lifecycle, classification, ownership or mocking behaviour. Below is the anatomy of such a descriptor, configuring labels and operation properties automatically:\napiVersion: mocks.microcks.io/v1alpha1\nkind: APIMetadata\nmetadata:\n  name: WeatherForecast API\n  version: 1.1.0\n  labels:\n    domain: weather\n    status: GA\n    team: Team C\noperations:\n  \u0026#39;GET /forecast/{region}\u0026#39;:\n    delay: 50\nFor more information on that feature, check out the APIMetadata documentation. You can also embed such metadata directly into your OpenAPI or AsyncAPI specification file. Please continue reading to the “OpenAPI \u0026amp; AsyncAPI Specification support” section 😉\nDeveloper \u0026amp; Installation eXperiences As mentioned in the introduction, Developer eXperience was a focus of 1.4.1, and together with installation enhancements it makes a big theme for this release!\nThe most important addition here is Docker Compose support for AsyncAPI mocking and testing! Not having to go through a Minikube or full Kubernetes cluster installation to use AsyncAPI in Microcks was a long-time request. Starting Microcks with AsyncAPI support and an embedded Kafka broker is now as easy as:\n$ docker-compose -f docker-compose.yml -f docker-compose-async-addon.yml up -d Thanks a lot to Ilia Ternovykh 🙏 for having baked this new capability. It is fully detailed in the Async Features with Docker Compose blog post if you want to give it a try!\nAside from this new feature come a lot of enhancements and capabilities suggested by the community. 
The most noticeable ones are:\nConnecting to a secured external Kafka broker (using TLS, MTLS or SCRAM) for producing mock messages, NodePort ServiceType for the Helm Chart install (as an alternative to a regular Ingress), thanks to a contribution from john873950 🙏, Resource values override for Keycloak and MongoDB, thanks to a contribution from john873950 🙏, Configuration of storage classes for Keycloak and MongoDB, thanks to a suggestion from Mohammad Almarri 🙏. OpenAPI \u0026amp; AsyncAPI specification support As our cornerstones, we can’t release a new Microcks version without enhancing the support of these two specifications!\nThe major novelty in this release is the introduction of Microcks-specific OpenAPI and AsyncAPI extensions, as allowed by both specifications. These extensions come in the form of x-microcks and x-microcks-operation attributes that you may insert into your specification document.\nAs in the example below, x-microcks can be used at the info level to specify labels to set on your AsyncAPI (or OpenAPI) once imported into Microcks:\nasyncapi: \u0026#39;2.1.0\u0026#39;\ninfo:\n  title: Account Service\n  version: 1.0.0\n  description: This service is in charge of processing user signups\n  x-microcks:\n    labels:\n      domain: authentication\n      status: GA\n      team: Team B\nYou can also insert an x-microcks-operation property at the operation level (for OpenAPI as well as AsyncAPI) to force some response delay or dispatching rules, like below:\nopenapi: \u0026#39;3.1.0\u0026#39;\n[...]\npost:\n  summary: Add a car to current owner\n  description: Add a car to current owner description\n  operationId: addCarOp\n  x-microcks-operation:\n    delay: 100\n    dispatcher: SCRIPT\n    dispatcherRules: |\n      def path = mockRequest.getRequest().getRequestURI();\n      if (!path.contains(\u0026#34;/laurent/car\u0026#34;)) {\n        return \u0026#34;Not Accepted\u0026#34;\n      }\n      def jsonSlurper = new groovy.json.JsonSlurper();\n      def car = jsonSlurper.parseText(mockRequest.getRequestContent());\n      if (car.name == null) {\n        return \u0026#34;Missing Name\u0026#34;\n      }\n      return \u0026#34;Accepted\u0026#34;\nWant to learn more about these extensions? Check our updated OpenAPI support and AsyncAPI support documentation. If embedding our extensions into your spec doesn’t please you, you can still use the new APIMetadata document as explained in the “Scaling labels using new APIMetadata” section 😇\nAnd of course we produced a number of fixes and enhancements, thanks to user feedback, that deal with edge cases of these specifications. Let’s mention:\nOne-liner OpenAPI JSON file support, Fixed OpenAPI to JSON Schema structure conversion for anyOf, oneOf or allOf, discovered by Ms. Boba 🙏, Message polymorphism using oneOf, anyOf or allOf constructions, detected by ivanboytsov 🙏. Community support Community contributions do not come only from feature requests, bug issues and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in demonstrations, inviting us to events or even talking about Microcks!\nWe’d like to thank the following awesome people:\nTamimi Ahmad 🙏 who invited us to talk about Microcks at the Solace Community Lightning Talk, where we had the opportunity to demonstrate our work. The recording is available on YouTube, The Solace Dev Community and Tamimi Ahmad 🙏 for working on a joint demonstration with their PubSub+ Event Portal product. The demo was played twice during Solace Office Hours at Kafka Summit Americas 😉 The Cloud Nord 🙏 team for inviting us to talk at their latest event. The recording is coming very soon, but for French folks only! 
Hugo Guerrero 🙏 for having two talks at Kafka Summit APAC and Americas 2021! Be sure to watch the replay of his Automated Apache Kafka Mocking and Testing with AsyncAPI session. Congrats mate! 💪 What’s coming next? As usual, we will be very happy to prioritize depending on community feedback: you can check and collaborate via our list of issues on GitHub. We’ll probably also set up some more Twitter polls to get your ideas about:\nprotocol additions (AMQP and GraphQL seem to be good candidates at the moment - see #402 and #401), an even easier onboarding experience for new users (see issue #484), community sharing of mocks and tests for regulatory or industrial standards (see this repository), more metrics and analytics to govern your APIs with Microcks. Remember that we are open and it means that you can jump on board to make Microcks even greater! Come and say hi! on our Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter and LinkedIn.\nThanks for reading and supporting us! Stay safe and healthy. ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/async-features-with-docker-compose/","title":"Async Features with Docker Compose","description":"Async Features with Docker Compose","searchKeyword":"","content":"For some weeks now, many users from the Microcks community have been asking to play with AsyncAPI-related features without having to set up a Minikube or a full Kubernetes instance. And Docker Compose is the perfect match for that! We were at first reluctant, as it is an additional configuration to support\u0026hellip; but developer experience FTW! 💪\nThis blog post is a detailed walkthrough on how to use asynchronous-related features with Docker Compose using the new set of compose files shipped in the Microcks master branch. This configuration has also entered our Installation documentation.\nSo all you need from now on is docker and docker-compose on your machine. Ready? Let\u0026rsquo;s go!\nStart-up Microcks with Async features Go to a temporary folder and remove previously downloaded latest images, in case you made any other attempt to use Microcks in the past:\n$ cd ~/Development/temp $ docker rmi quay.io/microcks/microcks:latest quay.io/microcks/microcks-async-minion:latest quay.io/microcks/microcks-postman-runtime:latest Then, clone a fresh copy of the Microcks Git repository:\n$ git clone https://github.com/microcks/microcks Cloning into \u0026#39;microcks\u0026#39;... remote: Enumerating objects: 10546, done. remote: Counting objects: 100% (1802/1802), done. remote: Compressing objects: 100% (790/790), done. remote: Total 10546 (delta 810), reused 1573 (delta 678), pack-reused 8744 Receiving objects: 100% (10546/10546), 2.68 MiB | 23.28 MiB/s, done. Resolving deltas: 100% (5347/5347), done. Go to the docker-compose installation folder and launch docker-compose with the async add-on:\n$ cd microcks/install/docker-compose $ docker-compose -f docker-compose.yml -f docker-compose-async-addon.yml up -d Creating network \u0026#34;docker-compose_default\u0026#34; with the default driver Pulling postman (quay.io/microcks/microcks-postman-runtime:latest)... 
latest: Pulling from microcks/microcks-postman-runtime cbdbe7a5bc2a: Already exists 95feee427958: Already exists 4123295e9f39: Already exists a59140832df1: Already exists 6504409a8831: Pull complete 9ce8afff0d5c: Pull complete 03f83af2527a: Pull complete f208b202f815: Pull complete Digest: sha256:dc95b935d95a65910b2905853f87befb47fc200ecb6a74a1f719a7f391a40e47 Status: Downloaded newer image for quay.io/microcks/microcks-postman-runtime:latest Pulling app (quay.io/microcks/microcks:latest)... latest: Pulling from microcks/microcks 158b4527561f: Pull complete a3ba00ce78fe: Pull complete e98e956a2ed9: Pull complete 5a89d95041e3: Pull complete abfab39b5884: Pull complete 69b0a8a97d13: Pull complete 15d01b436c7a: Pull complete 824a05dec27f: Pull complete Digest: sha256:65421add5646f597548319040bdf89b87028b3176ef00d9e16c4555dce4f9106 Status: Downloaded newer image for quay.io/microcks/microcks:latest Pulling async-minion (quay.io/microcks/microcks-async-minion:latest)... latest: Pulling from microcks/microcks-async-minion b26afdf22be4: Already exists 218f593046ab: Already exists e339d8c442c9: Pull complete a1d53dd9b348: Pull complete 383dfd0d63fc: Pull complete Digest: sha256:3ae2f6596e8c40fda9ff7cee5d43ee4d1e2c062794696af1ea3374a1d6c35ce6 Status: Downloaded newer image for quay.io/microcks/microcks-async-minion:latest Creating microcks-zookeeper ... done Creating microcks-postman-runtime ... done Creating microcks-db ... done Creating microcks-sso ... done Creating microcks-kafka ... done Creating microcks ... done Creating microcks-async-minion ... done Note that as we\u0026rsquo;re using latest tagged images here, the sha256 of these may vary.\nAfter some minutes, check everything is running. The Microcks app is bound on localhost:8080, Keycloak is bound on localhost:18080 and the Kafka broker is bound on localhost:9092:\n$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3779d9672ea1 quay.io/microcks/microcks-async-minion:latest \u0026#34;/deployments/run-ja…\u0026#34; About a minute ago Up 38 seconds 8080/tcp microcks-async-minion c2d7f3e10215 quay.io/microcks/microcks:latest \u0026#34;/deployments/run-ja…\u0026#34; About a minute ago Up About a minute 0.0.0.0:8080-\u0026gt;8080/tcp, :::8080-\u0026gt;8080/tcp, 8778/tcp, 0.0.0.0:9090-\u0026gt;9090/tcp, :::9090-\u0026gt;9090/tcp, 9779/tcp microcks 7e1f2d2c5305 strimzi/kafka:0.17.0-kafka-2.4.0 \u0026#34;sh -c \u0026#39;bin/kafka-se…\u0026#34; About a minute ago Up About a minute 0.0.0.0:9092-\u0026gt;9092/tcp, :::9092-\u0026gt;9092/tcp, 0.0.0.0:19092-\u0026gt;19092/tcp, :::19092-\u0026gt;19092/tcp microcks-kafka a9b150c73ba2 jboss/keycloak:14.0.0 \u0026#34;/opt/jboss/tools/do…\u0026#34; About a minute ago Up About a minute 8443/tcp, 0.0.0.0:18080-\u0026gt;8080/tcp, :::18080-\u0026gt;8080/tcp microcks-sso 05b0c649ee87 mongo:3.4.23 \u0026#34;docker-entrypoint.s…\u0026#34; About a minute ago Up About a minute 27017/tcp microcks-db ebb420d41691 strimzi/kafka:0.17.0-kafka-2.4.0 \u0026#34;sh -c \u0026#39;bin/zookeepe…\u0026#34; About a minute ago Up About a minute 0.0.0.0:2181-\u0026gt;2181/tcp, :::2181-\u0026gt;2181/tcp microcks-zookeeper 85b842e3e537 quay.io/microcks/microcks-postman-runtime:latest \u0026#34;docker-entrypoint.s…\u0026#34; About a minute ago Up About a minute 3000/tcp Note the different container identifiers, as we\u0026rsquo;ll use them later on to check their logs and execute some commands to verify everything is running fine.\nLoad a sample and check-up Now, follow the Getting Started guide. 
First, access Microcks on localhost:8080 from your browser and use admin/microcks123 to log in. Then go to the Importers page and add a new importer on the https://raw.githubusercontent.com/microcks/microcks/master/samples/UserSignedUpAPI-asyncapi.yml URL, as specified in the Loading samples section.\nYou should have the following result:\nCheck the relevant logs on the microcks container:\n$ docker logs c2d7f3e10215 ... 12:49:09.245 DEBUG 1 --- [080-exec-9] io.github.microcks.web.JobController : Creating new job: io.github.microcks.domain.ImportJob@2c6712c7 12:49:09.404 DEBUG 1 --- [080-exec-6] io.github.microcks.web.JobController : Getting job list for page 0 and size 20 12:49:09.408 DEBUG 1 --- [80-exec-10] .s.UserInfoHandlerMethodArgumentResolver : Creating a new UserInfo to resolve public org.springframework.http.ResponseEntity io.github.microcks.web.JobController.activateJob(java.lang.String,io.github.microcks.security.UserInfo) argument 12:49:09.408 DEBUG 1 --- [80-exec-10] .s.UserInfoHandlerMethodArgumentResolver : Found a KeycloakSecurityContext to map to UserInfo 12:49:09.409 DEBUG 1 --- [80-exec-10] i.g.m.s.KeycloakTokenToUserInfoMapper : Current user is: UserInfo{name=\u0026#39;null\u0026#39;, username=\u0026#39;admin\u0026#39;, givenName=\u0026#39;null\u0026#39;, familyName=\u0026#39;null\u0026#39;, email=\u0026#39;null\u0026#39;, roles=[manager, admin, user], groups=[]} 12:49:09.410 DEBUG 1 --- [80-exec-10] io.github.microcks.web.JobController : Activating job with id 612cd3c5c34e2146a8bd5b4d 12:49:09.460 DEBUG 1 --- [080-exec-1] .s.UserInfoHandlerMethodArgumentResolver : Creating a new UserInfo to resolve public org.springframework.http.ResponseEntity io.github.microcks.web.JobController.startJob(java.lang.String,io.github.microcks.security.UserInfo) argument 12:49:09.460 DEBUG 1 --- [080-exec-1] .s.UserInfoHandlerMethodArgumentResolver : Found a KeycloakSecurityContext to map to UserInfo 12:49:09.460 DEBUG 1 --- [080-exec-1] i.g.m.s.KeycloakTokenToUserInfoMapper : Current user is: UserInfo{name=\u0026#39;null\u0026#39;, username=\u0026#39;admin\u0026#39;, givenName=\u0026#39;null\u0026#39;, familyName=\u0026#39;null\u0026#39;, email=\u0026#39;null\u0026#39;, roles=[manager, admin, user], groups=[]} 12:49:09.460 DEBUG 1 --- [080-exec-1] io.github.microcks.web.JobController : Starting job with id 612cd3c5c34e2146a8bd5b4d 12:49:09.463 INFO 1 --- [080-exec-1] i.github.microcks.service.JobService : Starting import for job \u0026#39;User signed-up API\u0026#39; 12:49:09.464 INFO 1 --- [080-exec-1] i.g.microcks.service.ServiceService : Importing service definitions from https://raw.githubusercontent.com/microcks/microcks/master/samples/UserSignedUpAPI-asyncapi.yml 12:49:10.092 INFO 1 --- [080-exec-1] i.g.m.u.MockRepositoryImporterFactory : Found an asyncapi: 2 pragma in file so assuming it\u0026#39;s an AsyncAPI spec to import 12:49:10.193 DEBUG 1 --- [080-exec-1] i.g.microcks.service.ServiceService : Service [User signed-up API, 0.1.1] exists ? 
true 12:49:10.342 DEBUG 1 --- [080-exec-1] i.g.microcks.service.ServiceService : Service change event has been published 12:49:10.342 INFO 1 --- [080-exec-1] i.g.microcks.service.ServiceService : Having imported 1 services definitions into repository 12:49:10.344 DEBUG 1 --- [ task-1] i.g.m.l.ServiceChangeEventPublisher : Received a ServiceChangeEvent on 612ca95fb327764983693ef1 12:49:10.345 INFO 1 --- [080-exec-1] i.github.microcks.service.JobService : Import of job \u0026#39;User signed-up API\u0026#39; done 12:49:10.357 DEBUG 1 --- [ task-1] i.g.microcks.service.MessageService : Found 2 event(s) for operation 612ca95fb327764983693ef1-SUBSCRIBE user/signedup 12:49:11.124 DEBUG 1 --- [ task-1] i.g.m.l.ServiceChangeEventPublisher : Processing of ServiceChangeEvent done ! As stated in the logs, a new API User signed-up API, 0.1.1 has been discovered and is now available within Microcks repository. You can check this browsing the API | Services and discover your API details:\nFrom now, you should start having messages on the Kafka broker. Check the relevant logs on microcks-async-minion container:\n$ docker logs 3779d9672ea1 2021-08-30 12:49:11,234 INFO [io.git.mic.min.asy.AsyncMockDefinitionUpdater] (vert.x-eventloop-thread-0) Received a new change event [CREATED] for \u0026#39;612ca95fb327764983693ef1\u0026#39;, at 1630327750357 2021-08-30 12:49:11,236 INFO [io.git.mic.min.asy.AsyncMockDefinitionUpdater] (vert.x-eventloop-thread-0) Found \u0026#39;SUBSCRIBE user/signedup\u0026#39; as a candidate for async message mocking 2021-08-30 12:49:11,267 INFO [io.git.mic.min.asy.SchemaRegistry] (vert.x-eventloop-thread-0) Updating schema registry for \u0026#39;User signed-up API - 0.1.1\u0026#39; with 1 entries 2021-08-30 12:49:11,424 INFO [io.git.mic.min.asy.pro.ProducerManager] (QuarkusQuartzScheduler_Worker-25) Producing async mock messages for frequency: 10 2021-08-30 12:49:12,424 INFO [io.git.mic.min.asy.pro.ProducerManager] (QuarkusQuartzScheduler_Worker-6) Producing async mock messages for frequency: 3 2021-08-30 12:49:12,425 INFO [io.git.mic.min.asy.pro.KafkaProducerManager] (QuarkusQuartzScheduler_Worker-6) Publishing on topic {UsersignedupAPI-0.1.1-user-signedup}, message: {\u0026#34;id\u0026#34;: \u0026#34;b2R4e1OTjLfp7R4JWDoSQxQvVj92O9IH\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1630327752425\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} 2021-08-30 12:49:12,429 INFO [io.git.mic.min.asy.pro.KafkaProducerManager] (QuarkusQuartzScheduler_Worker-6) Publishing on topic {UsersignedupAPI-0.1.1-user-signedup}, message: {\u0026#34;id\u0026#34;:\u0026#34;OvnmDw3rO5LW7LmyZhj40Li9OKzN7htz\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1630327752429\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} 2021-08-30 12:49:15,423 INFO [io.git.mic.min.asy.pro.ProducerManager] (QuarkusQuartzScheduler_Worker-2) Producing async mock messages for frequency: 3 2021-08-30 12:49:15,424 INFO [io.git.mic.min.asy.pro.KafkaProducerManager] (QuarkusQuartzScheduler_Worker-2) Publishing on topic {UsersignedupAPI-0.1.1-user-signedup}, message: {\u0026#34;id\u0026#34;: \u0026#34;G5c5UerJHQP2JLKlBQiJS8eudx6KmFGN\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1630327755424\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: 
\u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} 2021-08-30 12:49:15,426 INFO [io.git.mic.min.asy.pro.KafkaProducerManager] (QuarkusQuartzScheduler_Worker-2) Publishing on topic {UsersignedupAPI-0.1.1-user-signedup}, message: {\u0026#34;id\u0026#34;:\u0026#34;u6BZ8l1u1LZG3hQH7TtJdWzQDXzq5z54\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1630327755426\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} Check the Kafka topic for messages, directly from your machine shell using kafkacat utility and 9092 advertised port:\n$ kafkacat -b localhost:9092 -t UsersignedupAPI-0.1.1-user-signedup -o end % Auto-selecting Consumer mode (use -P or -C to override) % Reached end of topic UsersignedupAPI-0.1.1-user-signedup [0] at offset 356 {\u0026#34;id\u0026#34;: \u0026#34;vcGIcN5mwytIFqtdaEljCRfDrDHg0u3u\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1630327965424\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;4m8ZDXMdFTWNR3AmnkT6u3HjXWnwPUEW\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1630327965450\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} % Reached end of topic UsersignedupAPI-0.1.1-user-signedup [0] at offset 358 {\u0026#34;id\u0026#34;: \u0026#34;eUVHsjv0VKPtxI7QxOnoEZ3ock6mek3k\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1630327968424\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;pzKlqOwucJnO6nVqmOrh7AAT9SFuoflD\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1630327968429\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} Yes! 😉\nYou can also connect to the running microcks-kafka container to use the built-in Kafka tools. 
This time, you access the broker using the kafka:19092 address:\n$ docker exec -it 7e1f2d2c5305 /bin/sh sh-4.2$ cd bin/ sh-4.2$ ./kafka-topics.sh --bootstrap-server kafka:19092 --list UsersignedupAPI-0.1.1-user-signedup __consumer_offsets microcks-services-updates sh-4.2$ ./kafka-console-consumer.sh --bootstrap-server kafka:19092 --topic UsersignedupAPI-0.1.1-user-signedup {\u0026#34;id\u0026#34;: \u0026#34;T1smkgqMAmyb2UVKXDAYKw5Vtx8KD9up\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1630328127425\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;NvKLRGG91NsyoK9dj9CGlk2D8NrqaZuC\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1630328127429\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;f85zgAtDzvku7Uztp58UDfTokvePJxlg\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1630328130425\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;YbJA2ZeOKVaw0qNbMgMOi3TE3pPtwFM7\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1630328130429\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} ^CProcessed a total of 4 messages sh-4.2$ exit exit That\u0026rsquo;s it! 🎉\nRemoving everything Happy with your Microcks discovery? You can turn everything off and free resources by executing this command:\n$ docker-compose -f docker-compose.yml -f docker-compose-async-addon.yml down Stopping microcks-async-minion ... done Stopping microcks ... done Stopping microcks-kafka ... done Stopping microcks-sso ... done Stopping microcks-db ... done Stopping microcks-zookeeper ... done Stopping microcks-postman-runtime ... done Removing microcks-async-minion ... done Removing microcks ... done Removing microcks-kafka ... done Removing microcks-sso ... done Removing microcks-db ... done Removing microcks-zookeeper ... done Removing microcks-postman-runtime ... done Removing network docker-compose_default Join the Microcks community! Come and say hi! on our Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter and LinkedIn.\nThanks for reading and supporting us!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.3.0-release/","title":"Microcks 1.3.0 release 🚀","description":"Microcks 1.3.0 release 🚀","searchKeyword":"","content":"We are so proud and happy to share this new major and important Microcks release two months ahead of our initial roadmap! Yes, this was yet another big challenge 🎉 Kudos to our community users and partners for supporting and pushing us to this momentum.\nNothing could have been done without all your feedback and contributions 👏\nSo why is this release so special? First, we always stay true to our principles and we are still applying our mantra of supporting ALL kinds of APIs and being community driven. 
We work hard and we strongly believe that Microcks is not only API tooling “by developers, for developers”: it also aims to create a bridge between all the enterprise layers, “à la” BizDevSecOps.\nAnd this new release is a big accomplishment as it includes in a single batch:\nTHE first (again…) mocking and testing tool integration that supports AsyncAPI Spec v2.1.0, YES, just one day after the spec release 💪 A new and very popular AsyncAPI protocol binding 👉 WebSocket, YES\u0026hellip; it is available right now for mocking and testing within Microcks 🎉 A big and very structural add-on is the support of Multi-artifacts for mock definitions: this unlocks some previous limitations and provides a clean way to better interoperate with our ecosystem (e.g. Postman Collections) and add new specific and tricky protocol bindings to our roadmap, Last but not least, and thanks to the new Multi-artifacts feature, we have been able to support and include gRPC communications on our API hunting board 😃 Standards \u0026amp; Protocols\u0026hellip; AsyncAPI Spec v2.1.0 was released on June 29th, and it includes one of our very important contributions:\nThis is amazing for us as it clearly confirms our contract-first vision and strategy. It took us a year to make it happen the way it should always be done 👉 using the standards: big thanks to the AsyncAPI folks and community for supporting and embracing this Microcks contribution.\nThis also makes Microcks the first tool to support AsyncAPI v2.1, as we have done previously for OpenAPI v3.1 😉 Remember, whether switching your spec version or tooling, Microcks offers you a smooth transition.\nWebSocket, you asked for it: here it is! The WebSocket API is an advanced technology that makes it possible to open a two-way interactive communication session between the user\u0026rsquo;s browser and a server. This is the perfect match with the AsyncAPI Spec; see this AsyncAPI blog post for more tech details.\nBased on the number of requests we received from the Microcks community to support WebSocket mocking and testing, we decided to launch a public poll on our Twitter account:\nAs you can see, this was a strong confirmation of users’ and partners’ interest in WebSocket integration, and boosted by this feedback we implemented WebSocket support in just two weeks!\nHow to enable WebSocket mocking and testing? Easy! Simply add a WebSocket binding to your AsyncAPI specification file and Microcks takes care of publishing endpoints in seconds - see the sketch just below. Check our updated documentation.
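As an illustration, here is a minimal sketch of what this can look like in an AsyncAPI document - the channel name and message reference are assumptions for the example:\nchannels:\n  user/signedup:\n    # declaring the WebSocket protocol binding on the channel is enough\n    bindings:\n      ws: {}\n    subscribe:\n      message:\n        $ref: \u0026#39;#/components/messages/userSignedUp\u0026#39;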
Endless possibilities with Multi-artifacts support Since its origin, Microcks has been following the 1 artifact == 1 API mock definition principle. However, we did get feedback from the community and are now convinced that this approach can sometimes be too restrictive.\nA use-case that is emerging is that some people may have a single OpenAPI file containing only base/simple examples but manage complementary/advanced examples using a Postman Collection, for instance. Moreover, specification formats have their own strengths and weaknesses. We do think there should be some smart way to use them in a complementary way to address complex use-cases - please continue reading to the following gRPC section 😉\nSo from 1.3.0, Microcks is now able to have multiple artifacts (1 primary and some secondary) mapping to 1 API mock definition. The primary one will bring Service and operation metadata as well as examples. The secondary ones will only enrich existing operations with new non-conflicting request/response and event examples.\nYou may now have multiple artifacts contributing to the same API mocks and tests definition: it opens up endless possibilities and use-case coverage. Check the documentation. A first demonstration of that is the tricky gRPC support just below, which was made possible only thanks to the multi-artifacts support.\nA hug to gRPC fans We always follow the gRPC vs REST debates from the back seat 😇 but clearly understand why some enterprises rely intensively on gRPC for their backend development\u0026hellip; See this great article from Google for more information.\nIntegrating gRPC within Microcks was not an easy task - mainly due to the fact that gRPC uses Protocol Buffers (aka protobuf), which is a data serialization protocol like JSON or XML. But unlike them, protobuf is not for humans: serialized data is compiled bytes, hard for a human to read. And for Microcks’ purposes, it does not include any notion of examples\u0026hellip; See how it all started in this discussion on GitHub.\nThanks to Ben Bolton’s 🙏 pugnacity and help, we have all together been able to validate a strong and robust implementation perfectly aligned with our vision and principles. This is one of the beauties of the new and great feature described above: “Multi-artifacts support”. Guess you now understand why it is so important for us, as it unlocks any new protocol integration in a very clean and smooth way 💥\nCheck out our gRPC usage for Microcks documentation that illustrates how Protocol Buffer specifications and Postman Collections can be combined and used together. You’ll see that defining mocks and tests is as easy as describing request and response expectations using JSON. Microcks will do the conversion to protobuf under the hood.\nCommunity amplification Community contributions do not come only from feature requests, bug issues and open discussions. What a pleasure to see people relaying our messages, integrating Microcks in demonstrations, inviting us to events or even talking about Microcks at events!\nWe’d like to thank the following awesome people:\nJonathan Vila 🙏 who invited us to talk about Microcks at the Barcelona JUG in a session dedicated to Web API Contracts, and also for giving us the idea of our new Import API GitHub Action, Dale Lane 🙏 who is including Microcks in some of his blog posts and videos, as he seems to use it a lot when playing with Node-RED. On our side we\u0026rsquo;re ready to collaborate on an IBM MQ binding implementation 😉 Shekhar Benarjee 🙏 for mentioning Microcks within his AsyncAPI 2.0: Enabling the Event-Driven World manifesto at eBay engineering, Hugo Guerrero 🙏 for releasing a nice Apache Kafka mock service with Microcks on DZone, as well as having two talks at Kafka Summit APAC and Americas 2021! Be sure to attend his Automated Apache Kafka Mocking and Testing with AsyncAPI session. Congrats mate! 💪 There would be many more to mention here, so sorry for those we forgot, but kudos to our amazing growing community: your help, feedback and support is a gift.\nWhat’s coming next? We still have many plans for the coming months and you should stay tuned during the summer time ;-) But as usual, we will be very happy to prioritize depending on community feedback: you can check and collaborate via our list of issues on GitHub.\nRemember that we are open and it means that you can jump on board to make Microcks even greater! Come and say hi! 
on our Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter and LinkedIn.\nThanks for reading and supporting us!\nStay safe and healthy and enjoy the summer time ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.2.1-release/","title":"Microcks 1.2.1 release 🚀","description":"Microcks 1.2.1 release 🚀","searchKeyword":"","content":"We are very glad to announce today the 1.2.1 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing. This is mainly an “Enhancement release” pushing further the features we introduced within the previous 1.2.0 release.\nWith this release, we are still applying our mantra of supporting ALL kinds of APIs and being community driven. Want some keywords on what’s in this 1.2.1 release? We’ve been working on OpenAPI v3.1, AsyncAPI MQTT and headers support, as well as on the user experience around tests and on installation through Podman support.\nLet’s have a quick review of what’s new and what it brings to our users.\nStandards \u0026amp; Protocols… OpenAPI v3.1 was released on February 18th, exactly three days before 1.2.0, so we were not able to embed its support at that time. It is now done! We made sure that you are able to use this new version without any issue, adding a dedicated test suite for that.\nThis makes Microcks one of the first tools to embrace OpenAPI v3.1, as mentioned by @apisyouwonthate. Whether switching your spec version or tooling, Microcks offers you a smooth transition.\nEvent Driven Architecture (EDA) is all the rage today in the cloud-native era, as it brings you space and time decoupling as well as better resiliency and elasticity. However, people struggle with picking the right specification: AsyncAPI or CloudEvents? Why not both? We demonstrate in Simulating CloudEvents with AsyncAPI and Microcks the benefits it brings. And we added support for AsyncAPI specification headers to make it work!\nCloudEvents simulation and compliance testing are at your fingertips with Microcks! The mechanism we detailed makes Microcks suitable for any messaging envelope standard. See issue #360 for more details.\nStill in the EDA space, you may know that we introduced MQTT support in the previous release, but we did not grasp all the subtleties of it ;-) Thanks to some Solace contributions, we fixed channel naming conventions and added support for channel parameters as well.\nWith these community contributions, the AsyncAPI spec coverage in Microcks is near complete and makes it the most comprehensive tooling for managing, testing and governing your EDA assets. See issues #363, #378 and #379 for more details.\nAnd this is a nice transition to remind you how the Microcks roadmap is\u0026hellip;\n… driven by community feedback! 🎉 Kudos to our community for great interactions, feedback, enhancement proposals and contributions these last two months! We’re very proud to have achieved 350 🌟 on GitHub last week! Here are some noticeable contributions we integrated within the 1.2.1 release.\nJonathan Vila 🙏 suggested many test enhancements: the use of Secrets for authentication, test timeouts, test replays and expression languages in requests will drastically improve the testing experience, Nicolas Massé 🙏 - a Fedora geek ;-) - contributed the Podman Compose support for Microcks as a more secure alternative to Docker on your laptop. 
Nicolas also wrote a nice introduction on our blog as well as a more in-depth article on Red Hat Developers, fogoforth 🙏 implemented the SOAP 1.2 support in Microcks, fixing the incoming version detection and the response content-type. Thanks for what seems to be your first-time contribution to open source, Roxana Sterca 🙏 discovered an undocumented feature and helped fix and validate the new SCRIPT dispatcher for REST mocking. Stay tuned for some documentation on this feature in the forthcoming weeks. There would be many more to mention here, so thanks a lot to all those we didn’t mention who help by giving useful feedback every day.\nWhat’s coming next? In just a little more than two months since the previous 1.2.0 release, we have been able to do a lot thanks to your ideas and help.\nWe have many plans for the coming months but will be very happy to prioritize depending on community feedback: WebSocket, gRPC, GraphQL\u0026hellip; That’s why we put substantial effort into creating several issues on GitHub to detail the options we have in front of us. Please use them to react and vote for your preferred ones to allow us to prioritize the backlog!\nRemember that we are open and it means that you can jump on board to make Microcks even greater! Come and say hi! on our Discord chat 🐙 , simply send some love through GitHub stars ⭐️ or follow us on Twitter.\nThanks for reading and supporting us! Stay safe and healthy. ❤️\n"},{"section":"Blog","url":"https://microcks.io/blog/simulating-cloudevents-with-asyncapi/","title":"Simulating CloudEvents with AsyncAPI and Microcks","description":"Simulating CloudEvents with AsyncAPI and Microcks","searchKeyword":"","content":" TL;DR: CloudEvents and AsyncAPI are complementary specifications that help define your Event Driven Architecture. Microcks allows simulation of CloudEvents to speed up development and ensure the autonomy of development teams.\nThe rise of Event Driven Architecture (EDA) is a necessary evolution step towards cloud-native applications. Events are the ultimate weapon to decouple your microservices within your architecture. They bring great benefits like space and time decoupling, better resiliency and elasticity.\nBut events also come with challenges! One of the first you face when starting up as a development team - aside from the technology choice - is how to describe the structure of these events. Another challenge that comes very quickly after is: How can we efficiently work as a team without having to wait for someone else\u0026rsquo;s events?\nWe\u0026rsquo;ll explore those two particular challenges and see how to simulate events using CloudEvents, AsyncAPI and Microcks.\nCloudEvents or AsyncAPI? New standards like CloudEvents or AsyncAPI came up recently to address this need for structure description. People keep asking: Should I use CloudEvents or AsyncAPI? There is a belief that CloudEvents and AsyncAPI are competing on the same scope. I see things differently, and I\u0026rsquo;d like to explain to you why. Read on!\nWhat is CloudEvents? From cloudevents.io:\nCloudEvents is a specification for describing event data in common formats to provide interoperability across services, platforms, and systems.\nCloudEvents\u0026rsquo; purpose is to establish a common format for event data description, and the specification is part of the CNCF\u0026rsquo;s Serverless Working Group. A lot of integrations already exist within Knative Eventing, TriggerMesh or Azure Event Grid, allowing true cross-vendor platform interoperability.\nThe CloudEvents specification is focused on the events and defines a common envelope (set of attributes) for your application event. See this example from their repo:
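Reproduced here in its canonical JSON form - attribute values are illustrative:\n{\n  \u0026#34;specversion\u0026#34;: \u0026#34;1.0\u0026#34;,\n  \u0026#34;type\u0026#34;: \u0026#34;com.example.someevent\u0026#34;,\n  \u0026#34;source\u0026#34;: \u0026#34;/mycontext\u0026#34;,\n  \u0026#34;id\u0026#34;: \u0026#34;A234-1234-1234\u0026#34;,\n  \u0026#34;time\u0026#34;: \u0026#34;2018-04-05T17:31:00Z\u0026#34;,\n  \u0026#34;datacontenttype\u0026#34;: \u0026#34;text/xml\u0026#34;,\n  \u0026#34;data\u0026#34;: \u0026#34;\u0026lt;much wow=\\\u0026quot;xml\\\u0026quot;/\u0026gt;\u0026#34;\n}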
This is a structured CloudEvent. As of today, CloudEvents proposes two different content modes for transferring events: structured and binary.\nHere your event data is actually the \u0026lt;much wow=\\\u0026quot;xml\\\u0026quot;/\u0026gt; XML node, but it can be of any type. CloudEvents takes care of defining meta-information about your event but does not really help you define the actual content of your event.\nWhat is AsyncAPI? From asyncapi.com:\nAsyncAPI is an industry standard for defining asynchronous APIs. Our long-term goal is to make working with EDAs as easy as it is to work with REST APIs.\nSo here\u0026rsquo;s a new term: API. API implies talking about application interaction and capabilities. AsyncAPI can indeed be seen as the sister specification of OpenAPI, but targeting asynchronous protocols based on event brokering.\nAsyncAPI is focused on the application and the communication channels it uses. Unlike CloudEvents, AsyncAPI does not impose how your events should be structured. However, AsyncAPI provides extended means to precisely define the event\u0026rsquo;s format: both the meta-information and the actual content. See an example:\nFrom this example, you can see the definition of a User signed-up event that is published to the user/signedup channel. These events have 3 properties: fullName, email and age, which are defined using semantics coming from JSON Schema. Also - but not shown in this example - AsyncAPI allows us to specify event headers and whether these events will be available through different protocol bindings like Kafka, AMQP, MQTT or WebSocket.\nCloudEvents with AsyncAPI From the example and explanations above, you see that both standards are tackling different scopes! We can actually combine them to achieve a complete specification of an event: including application definition, channels description, structured envelope and detailed functional data carried by the event.\nThe global idea of a combination is to use the AsyncAPI specification as a hosting document. It will hold references to CloudEvents attributes and add some more details on the event format.\nThere are two mechanisms we can use in AsyncAPI to ensure this combination. Choosing the correct mechanism may depend mainly on the protocol you\u0026rsquo;ll choose to convey your events. Things aren\u0026rsquo;t perfect yet and you\u0026rsquo;ll have to make a choice 🤨.\nLet\u0026rsquo;s take the example of using Apache Kafka to distribute events.\nIn the structured content mode, CloudEvents meta-information is tangled with the data in the message value. For that mode, we\u0026rsquo;ll use the JSON Schema composition mechanism that is accessible from AsyncAPI, In the binary content mode (that may use Avro), CloudEvents meta-information is dissociated from the message value and projected onto the message headers. For that, we\u0026rsquo;ll use the MessageTrait application mechanism present in AsyncAPI. Structured content mode Let\u0026rsquo;s move our previous AsyncAPI example so that it can reuse CloudEvents in structured content mode. The important things to notice in the completed definition - sketched just below - are:\nThe definition of headers, containing our application custom-header as well as the mandatory CloudEvents content-type, The inclusion of the CloudEvents spec, reusing this specification as a basis for our message, The refining of the data property description.
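Here is a minimal sketch of what this composition can look like - the CloudEvents schema URL, channel and property details are assumptions rather than the exact original document:\nchannels:\n  user/signedup:\n    subscribe:\n      message:\n        headers:\n          type: object\n          properties:\n            custom-header:\n              type: string\n            content-type:\n              const: application/cloudevents+json; charset=UTF-8\n        payload:\n          allOf:\n            # reuse the CloudEvents JSON Schema as a basis for the message (URL is an assumption)\n            - $ref: \u0026#39;https://raw.githubusercontent.com/cloudevents/spec/v1.0/spec.json\u0026#39;\n            # refine the data property with the functional content of the event\n            - type: object\n              properties:\n                data:\n                  type: object\n                  properties:\n                    fullName: { type: string }\n                    email: { type: string, format: email }\n                    age: { type: integer }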
The important things to notice here are:
The definition of headers on line 16, containing our application custom-header as well as the mandatory CloudEvents content-type, The inclusion of the CloudEvents spec on line 33, reusing this specification as a basis for our message, The refining of the data property description on line 36. Binary content mode Let's do the same thing as in our previous AsyncAPI example, but now applying the binary content mode. Here's the completed definition:
The important things to notice here are:
The application of a trait at the message level on line 16. The trait resource is just a partial AsyncAPI document containing a MessageTrait definition. This trait brings in all the mandatory attributes (ce_*) from CloudEvents. It is indeed the equivalent of the CloudEvents JSON Schema. This time we're specifying our event payload using an Avro schema, as specified on line 25. What are the benefits? Whatever the content mode you choose, you now have a comprehensive description of your event and all the elements of your Event Driven Architecture! Not only are you guaranteeing its low-level interoperability - with the ability to be routed and to trigger some function in a Serverless world - but you also bring a complete description of the carried data, which will be of great help for applications consuming and processing events.
Simulating CloudEvents with Microcks Let's tackle the second challenge: how can we efficiently work as a team without having to wait for someone else's events? We saw just above how we can fully describe our events. However, it would be even better to have a pragmatic approach for leveraging this CloudEvents + AsyncAPI contract… And that's where Microcks comes to the rescue 😎
What is Microcks? Microcks is an Open source Kubernetes-native tool for mocking/simulating and testing APIs. One purpose of Microcks is to turn your API contract (OpenAPI, AsyncAPI, Postman Collection) into live mocks in seconds. It means that once it has imported your AsyncAPI contract, Microcks starts producing mock events on a message broker at a defined frequency.
Using Microcks you can then simulate CloudEvents in seconds, without writing a single line of code. Microcks allows the team relying on input events to start working without waiting for the team coding the event publication.
Use it for CloudEvents How does Microcks do that? Simply by re-using the examples you may add to your contract. We omitted the examples property before, so let's see now how to specify such examples for the binary content mode on line 27:
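Here is a condensed sketch of such an examples block - the example name and the concrete values are illustrative, and the templating functions shown are assumptions drawn from Microcks' template function set:
examples:
  john:
    headers:
      ce_id: '{{ uuid() }}'          # dynamic value, regenerated for every mock message
      ce_time: '{{ now() }}'
      ce_type: io.microcks.example.user-signedup
      ce_specversion: '1.0'
      ce_source: /mycontext/subcontext
      content-type: application/avro
      sentAt: '2020-03-11T08:03:38Z'
    payload:                         # plain YAML here, even though the payload is Avro-binary encoded on the wire
      fullName: John Doe
      email: john.doe@example.com
      age: 36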
Some interesting things to notice here:
You can put as many examples as you want, as this is a map in AsyncAPI, You can specify both headers and payload values, Even if the payload will be Avro-binary encoded, you use YAML or JSON to specify examples, You may use templating functions with the {{ }} notation to introduce random or dynamic values. Once imported into Microcks, it discovers the API definition as well as the different examples. It immediately starts producing mock events on the Kafka broker it is connected to - every 3 seconds here.
Since release 1.2.0, Microcks also supports connecting to a Schema Registry. It therefore publishes the Avro schema used at mock message publication time. Using the kafkacat CLI tool, it's then easy to connect to the Kafka broker and registry - we're using the Apicurio Service Registry here - to inspect the content of mock events:
$ kafkacat -b my-cluster-kafka-bootstrap.apps.try.microcks.io:9092 \
    -t UsersignedupCloudEventsAPI_0.1.3_user-signedup \
    -s value=avro -r http://apicurio-registry.apps.try.microcks.io/api/ccompat \
    -o end -f 'Headers: %h - Value: %s\n'
--- OUTPUT
% Auto-selecting Consumer mode (use -P or -C to override)
% Reached end of topic UsersignedupCloudEventsAPI_0.1.3_user-signedup [0] at offset 276
Headers: sentAt=2020-03-11T08:03:38Z,content-type=application/avro,ce_id=7a8cc388-5bfb-42f7-8361-0efb4ce75c20,ce_type=io.microcks.example.user-signedup,ce_specversion=1.0,ce_time=2021-03-09T15:17:762Z,ce_source=/mycontext/subcontext - Value: {"fullName": "John Doe", "email": "[email protected]", "age": 36}
% Reached end of topic UsersignedupCloudEventsAPI_0.1.3_user-signedup [0] at offset 277
Headers: ce_id=dde8aa04-2591-4144-aa5b-f0608612b8c5,sentAt=2020-03-11T08:03:38Z,content-type=application/avro,ce_time=2021-03-09T15:17:733Z,ce_type=io.microcks.example.user-signedup,ce_specversion=1.0,ce_source=/mycontext/subcontext - Value: {"fullName": "John Doe", "email": "[email protected]", "age": 36}
% Reached end of topic UsersignedupCloudEventsAPI_0.1.3_user-signedup [0] at offset 279
We can check that the emitted events respect both the CloudEvents meta-information structure and the AsyncAPI data definition. Moreover, each event has some different random attributes, allowing it to simulate diversity and variation for the consuming application.
Wrap-up We've learned in this - quite long 😉 - blog post how to solve some of the challenges that come with EDA.
First, we described how recent standards like CloudEvents and AsyncAPI focus on different scopes: the event for CloudEvents and the application for AsyncAPI.
Then we demonstrated how both specifications can be combined to provide a comprehensive description of all the elements involved in an Event-Driven Architecture: application definition, channels description, structured envelope and detailed functional data carried by the event. Both specifications are complementary, and using one or both is a matter of how deep you want to go in this formal description.
Finally, we saw how Microcks can be used to simulate any events based on AsyncAPI - CloudEvents ones included - just by using examples. It answers the challenge of working, testing and validating autonomously when different development teams are using EDA.
I hope you learned something new; if so, please consider reacting, commenting or sharing.
Thanks for reading! 👋
"},{"section":"Blog","url":"https://microcks.io/blog/podman-compose-support/","title":"Podman Compose support in Microcks","description":"Podman Compose support in Microcks","searchKeyword":"","content":"While Docker is still the #1 option for software packaging and installation on the developer laptop, Podman is gaining traction. Podman advertises itself as a drop-in replacement for Docker. Just put alias docker=podman and you would be good to go, they said 😉
Whilst the reality is a bit more nuanced, we made the necessary adjustments to make it that simple.
Today it is a pleasure to contribute this adaptation back to the Microcks community! It will allow Podman early and happy adopters - like me - to run Microcks on their laptop in the safest way.
As of version 1.2.0 of Microcks, we thus announce Podman Compose support for quickly getting started with Microcks on your laptop. We still recommend using Kubernetes ☸️ for serious use-cases 😉
Give it a try! As explained in the Installing with podman-compose doc, you should first ensure that you have the Podman and Podman Compose packages installed.
Then it's just a matter of cloning the repository, navigating to the correct folder and running our supporting script that runs Podman in rootless mode:
$ git clone https://github.com/microcks/microcks.git
$ cd microcks/install/podman-compose
$ ./run-microcks.sh
Running rootless containers...
Discovered host IP address: 192.168.3.102
Starting Microcks using podman-compose ...
------------------------------------------
Stop it with: podman-compose -f microcks.yml --transform_policy=identity stop
Re-launch it with: podman-compose -f microcks.yml --transform_policy=identity start
Clean everything with: podman-compose -f microcks.yml --transform_policy=identity down
------------------------------------------
Go to https://localhost:8080 - first login with admin/123
Having issues? Check you have changed microcks.yml to your platform using podman version:
podman version 2.1.1
podman run [...]
🎉 This will start the required containers and set up a simple environment for your usage.
Open a new browser tab and point to the http://localhost:8080 endpoint. This will redirect you to the Keycloak Single Sign-On page for login. Use the default credentials (admin/123) to log into the application and start using Microcks.
Want to see what's running? Check the running containers with:
$ podman ps
CONTAINER ID  IMAGE                                              COMMAND               CREATED         STATUS             PORTS                     NAMES
68faf7825db1  quay.io/microcks/microcks:latest                                         8 seconds ago   Up 7 seconds ago   0.0.0.0:8080->8080/tcp    microcks
71af3326ba9d  docker.io/jboss/keycloak:10.0.1                    -b 0.0.0.0 -Dkeyc...  9 seconds ago   Up 9 seconds ago   0.0.0.0:8180->8080/tcp    microcks-keycloak
5f5ee84c76fd  quay.io/microcks/microcks-postman-runtime:latest   node app.js           10 seconds ago  Up 10 seconds ago  0.0.0.0:3000->3000/tcp    microcks-postman-runtime
d2e8d1066c48  docker.io/library/mongo:3.4.23                     mongod                11 seconds ago  Up 11 seconds ago  0.0.0.0:27017->27017/tcp  microcks-mongo
Want to have more? Podman adopts a very different architecture from Docker: it involves no daemon at all and can run as a regular user (rootless mode) or as root (rootful mode).
If you're a Podman user and happy with it (or if you struggle to make it work 😉), come and say hi! on our Discord chat 🐙
"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.2.0-release/","title":"Microcks 1.2.0 release 🚀","description":"Microcks 1.2.0 release 🚀","searchKeyword":"","content":"We are delighted to announce the 1.2.0 release of Microcks - the Open source Kubernetes-native tool for API Mocking and Testing. With this new release, we are pursuing further our vision of a unique tool and consistent approach for speeding up the delivery and governing the lifecycle of ALL kinds of APIs - whether synchronous or asynchronous.
In this release, we put a lot of effort (and love ❤️) into listening to and implementing feedback and ideas from our community.
Three major things came up as requests and feedback and made the key themes for this release:
People are finding Apache Kafka everywhere, tightly coupled with Apache Avro. As a result, we have added that feature! Also, people want to use Microcks for the Internet of Things world and need another protocol binding. Hence, we have added MQTT support! Users are looking for advanced logic in their OpenAPI mocking. So, we implemented enhancements to have the smartest engine! As an Open Source project made for Enterprise usage, one major directive is ecosystem integration. You will see in this post that we take care of making Microcks work with many vendors' products - be it registries, message brokers, or even Kubernetes distributions.
Let's review what's new in each of our highlights without delay.
Avro & Schema Registry support With this new release, Microcks now supports Apache Avro encoding for extra-small messages. Avro is a compact binary format that is largely used in the Big Data and Apache Hadoop ecosystems. It is also very popular on top of Apache Kafka, as it makes the exchange of messages reliable through the use of Avro schemas.
When Avro is used with Kafka, it is also common to have a registry for easily sharing schemas with consuming applications. Microcks can now integrate with your organization's schema registries in order to:
speed up the process of propagating Avro schema updates to API event consumers, detect any drifting issues between the expected Avro schema and the one effectively used. Microcks has been successfully tested with both Confluent Schema Registry and Apicurio Service Registry. You can find full documentation on this feature in our Kafka, Avro and Schema Registry guide.
MQTT support The Message Queuing Telemetry Transport protocol (MQTT) is a standard messaging protocol for the Internet of Things (IoT). It is used today in a wide variety of industries, such as automotive, manufacturing, telecommunications, oil and gas, etc. We received a massive push from community users to add MQTT support and are now happy to announce that version 3.1.1 of MQTT is the second supported messaging protocol in Microcks!
Thanks to the excellent AsyncAPI Specification and its support in Microcks, you are now able to design your API and produce mocks with multi-binding support! You define your API once, and the Microcks tooling takes care of publishing mocks and testing messages using one or both protocols.
Microcks has been successfully tested with ActiveMQ Artemis as well as Eclipse Mosquitto. Check out the full documentation on MQTT Mocking and Testing here.
OpenAPI enhancements Aside from the major new features around Avro and MQTT support, we also deliver significant enhancements to OpenAPI mocking and testing.
We have added a lot of new templating functions that allow Microcks to generate dynamic, meaningful mock responses. You can now easily use randomFullName(), randomStreetAddress() or randomEmail() functions in your examples to have smart and always-different mocks. Moreover, we introduced notation compatibility with Postman dynamic variables so that you can reuse your existing Postman Collections without any change.
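For instance, a response example can embed these functions directly; here's a minimal sketch (the surrounding schema and example name are hypothetical):
content:
  application/json:
    examples:
      random_user:
        value:
          fullName: '{{ randomFullName() }}'       # regenerated on every mock invocation
          email: '{{ randomEmail() }}'
          address: '{{ randomStreetAddress() }}'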
We have also added a new FALLBACK dispatcher that helps define default responses and advanced behavior for your mocks.
Thanks a lot to our community users 🙏 - @gkleij, @robvalk and @ChristianHauwert - who suggested enhancements and helped validate them. Check our documentation on Template functions here and have a look at our blog post introducing Fallback and advanced dispatching.
Get started with a streamlined installation experience Developer experience is of great importance to us and we have worked to make it even simpler to get started with Microcks. The docker-compose based install has been drastically improved and does not require any configuration to start up! Installation procedures now all contain default users so that you can start playing immediately.
The Kubernetes Operator install has also been simplified with a one-liner installation (well actually, it's two lines 😉):
$ kubectl apply -f https://microcks.io/operator/operator-latest.yaml -n microcks
$ curl https://microcks.io/operator/minikube-minimal.yaml -s | sed 's/KUBE_APPS_URL/'$(minikube ip)'.nip.io/g' | kubectl apply -n microcks -f -
While at first glance it looks simpler, the installation has been enhanced to adapt to any Kube configuration, and advanced users now have the ability to specify resource utilization for the different components.
Thanks a lot to our community users 🙏 - @hguerrero, @dicolasi and @abinet - who pushed for simplification and helped us track, fix and validate all these issues. Please check our new Getting Started videos available on the Home Page or through our YouTube channel.
What's coming next? In just a little more than three months since the previous 1.1.0 release, we have been able to do a lot thanks to your ideas and help. Kudos for being so supportive and pushing Microcks up!
We have many plans for the coming months but will be very happy to prioritize depending on community feedback. Off the top of our heads, we are planning to work on:
protocol binding additions for AsyncAPI (AMQP seems a good candidate at the moment), community sharing of mocks and tests for regulatory or industrial standards, more metrics and analytics to govern your APIs with Microcks. Remember that we are open, which means you can jump on board to make Microcks even greater! Come and say hi! on our Discord chat 🐙, simply send some love through GitHub stars ⭐️ or follow us on Twitter.
Thanks for reading and supporting us! Stay safe and healthy. ❤️
"},{"section":"Blog","url":"https://microcks.io/blog/advanced-dispatching-constraints/","title":"Advanced Dispatching and Constraints for mocks","description":"Advanced Dispatching and Constraints for mocks","searchKeyword":"","content":"The purpose of this post is to explain the advanced dispatching and constraint features available when mocking a REST API with Microcks. As I recently went through the documentation again while answering questions on our Discord chat, I realized that all the pieces were present but we did not have an overall example showing how to use them!
So I set up this new example based on a real-life use-case our community users submitted. It is based on a very simple WeatherForecast API that has just one GET endpoint for fetching the forecast. This endpoint has two query parameters:
region (one of the four cardinal points) allows specifying the zone to fetch, apiKey is a parameter allowing to identify the API caller and apply tracing, rate limits and so on… Photo by Jordan Madrid on Unsplash We'll see how to configure advanced mocking rules in Microcks so that requests are routed to the correct responses based on the region value, and apiKey is checked as mandatory even though we do not care about its actual value. If the user specifies an unknown region, the mock should return a fallback response.
Let's start! Let's start by importing the below OpenAPI contract into your running Microcks instance. As this is a GitHub gist, you can easily retrieve it.
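Below is a condensed sketch of the contract's shape - the line numbers quoted in the list that follows refer to the complete gist, and the unknown example value is an assumption:
openapi: 3.0.2
info:
  title: WeatherForecast API
  version: 1.0.0
paths:
  /forecast:
    get:
      operationId: GetForecast
      summary: Get forecast for region
      parameters:
        - name: region
          in: query
          required: true
          schema:
            type: string
          examples:
            unknown:
              value: center        # assumption: any value outside the four cardinal points
            north:
              value: north
            # ... east, west and south parameter examples elided ...
        - name: apiKey
          in: query
          schema:
            type: string           # no example here: random values make no sense
      responses:
        '200':
          description: Weather forecast for the requested region
          content:
            application/json:
              examples:
                north:
                  value: {region: north, temp: -1.5, weather: snowy, visibility: 25}
                # ... east, west and south response examples elided ...
        '404':
          description: Region is unknown
          content:
            text/plain:
              examples:
                unknown:
                  value: Region is unknown. Choose in north, west, east or south.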
If you already have many APIs in the repository, you'll find this one under the name WeatherForecast API with version 1.0.0.
Some important things to notice in this OpenAPI specification:
There's a single GET operation definition starting at line 16, We defined north, east, west and south examples for the 200 response - see lines 50 to 74 - as well as examples with the same names for the region query parameter - see lines 23 to 29, We defined an unknown example for the 404 response - see lines 82 and 83 - as well as an example with the same name for the query parameter - see line 21, We defined an apiKey query parameter starting at line 37 but did not specify any example, as it makes no sense for random values. Once imported into Microcks, you should have the same result as the screenshot below:
Some important things to notice here about how Microcks has interpreted the data coming from the OpenAPI specification:
It has inferred that the dispatcher will be based on URI_PARAMS (see Your 1st REST mock for an introduction to dispatchers), It has inferred that this dispatcher will take care of two parameters: region and apiKey, It has discovered 5 request/response sample pairs (see OpenAPI Usage Conventions for detailed explanations). Each request holds an example Mock URL for invoking it. As soon as it has been imported, new mock endpoints are available and you can start playing around with the mocks, as illustrated by the commands below:
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=east -k -s | jq .
{
  "region": "east",
  "temp": -6.6,
  "weather": "frosty",
  "visibility": 523
}
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=north -k -s | jq .
{
  "region": "north",
  "temp": -1.5,
  "weather": "snowy",
  "visibility": 25
}
OK! So the default is working pretty well, but we'll need to add our constraint related to apiKey and manage our fallback response as well 😉
Adding a constraint We need to add a constraint on the apiKey query parameter so that requests that do not have this parameter are refused by Microcks. In Microcks you can easily add constraints to an operation from the Edit Properties page reached from the API summary. You'll just need to have the manager or admin role assigned.
Once on the properties edition for the GET /forecast operation, add a new constraint as illustrated below:
Do not forget to hit the Save button, and then you can retry calling a mock endpoint:
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=east -k
Parameter apiKey is required. Check parameter constraints.
🎉 Perfect! Our constraint now applies correctly.
Getting back to the API summary page and looking at the operation details, you'll see that a new Parameter Constraints block has appeared, explaining the constraint:
So far so good, but now let's try adding the apiKey parameter to our requests:
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=east\&apiKey\=qwertyuiop -k -s | jq .
{
  "region": "north",
  "temp": -1.5,
  "weather": "snowy",
  "visibility": 25
}
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=west\&apiKey\=qwertyuiop -k -s | jq .
{
  "region": "north",
  "temp": -1.5,
  "weather": "snowy",
  "visibility": 25
}
It seems OK at first sight but wait… we are now receiving the same response whatever the requested region! What the hell!? 🧐
Adjusting dispatcher rules The problem is that we now supply apiKey, and remember that apiKey belongs to the dispatching rules. When receiving a request, Microcks looks for a response associated with the qwertyuiop value of apiKey, and because we have not defined examples for apiKey it finds nothing… Its fallback behaviour is then to answer with the first response it finds - here, the north response.
From there you have two options:
Define a set of possible values for apiKey within the OpenAPI specification examples. This adds complexity and a number of examples to manage if you're handling combinations of parameters, Simply tell Microcks not to worry about the apiKey value when dispatching to a response. This makes a lot of sense here, as this parameter is purely technical! Obviously we choose the second option and get back to the Edit Properties page for this operation. Just below the parameter constraints we used previously, we have the ability to change the dispatching properties. We'll simply tell Microcks to keep the current dispatcher but adapt the rules so that region is the sole dispatching criterion:
Once saved, you will be able to test the different mock URLs for the four regions again, and you'll see that you now get the response associated with each requested region:
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=west\&apiKey\=qwertyuiop -k -s | jq .
{
  "region": "west",
  "temp": 12.2,
  "weather": "rainy",
  "visibility": 300
}
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=south\&apiKey\=qwertyuiop -k -s | jq .
{
  "region": "south",
  "temp": 28.3,
  "weather": "sunny",
  "visibility": 1500
}
🎉 Excellent! We solved our routing issue. But let's now try with an unknown center region:
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=center\&apiKey\=qwertyuiop -s | jq .
{
  "region": "north",
  "temp": -1.5,
  "weather": "snowy",
  "visibility": 25
}
We still get the default fallback response because Microcks cannot find any response associated with the center region… 🤨
Changing the dispatcher In order to address our final requirement - having a proper 404 response if the region is unknown - we have to change the dispatcher that was inferred by Microcks. Let's get back to the Edit Properties page for the operation once again and now change the dispatcher to the FALLBACK value. You'll see documentation appearing on the right, with the ability to copy-paste the example.
The FALLBACK dispatcher is a new feature of the 1.2.0 release. Depending on the day you are reading this post, it is possible that the release is not yet available. If you need it urgently, please use the latest version of Microcks: this feature is already enabled there and will remain available until the 1.2.0 announcement.
The FALLBACK dispatcher behaves kinda like a try-catch wrapping block in programming: it tries applying a first dispatcher with its own rule, and if it finds nothing it defaults to the fallback response. Configure this dispatcher as shown below, picking the unknown response as the one representing our fallback.
Hit the Save button and test the previous curl command again; you'll see that you now receive the 404 response called unknown:
$ curl https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast\?region\=center\&apiKey\=qwertyuiop -k
Region is unknown. Choose in north, west, east or south.
🎉 TADAM! Now when getting back to the API summary page and checking the GET /forecast operation details, you'll see that the dispatcher and dispatching rules have been updated to display your new configuration:
Wrap-up In this blog post, we walked through a near real-life sample explaining the Microcks default dispatching engine as well as the advanced customizations available. We saw that the default configuration deduced from the OpenAPI specification alone is a great start but does not handle more advanced scenarios where we need a little smartness. Microcks proposes advanced constructs for Parameter Constraints and Dispatching Rules: we only scratched the surface here!
You may think that setting up this configuration is cumbersome, but remember that you'll only have to do it once! Microcks keeps your customizations upon subsequent imports - as long as you keep the same operation name, of course 😉
As a primer on what's coming next in Microcks, we plan to integrate some OpenAPI Specification Extensions so that these customizations can live directly within the specification file:
paths:
  /forecast:
    get:
      operationId: GetForecast
      summary: Get forecast for region
      x-microcks-dispatcher: FALLBACK
      x-microcks-dispatcherRules:
        dispatcher: URI_PARAMS
        dispatcherRules: region
        fallback: unknown
If you're interested in this feature, do not hesitate to comment on or vote for the GitHub issue!
Take care and stay tuned. ❤️
"},{"section":"Blog","url":"https://microcks.io/blog/continuous-testing-all-your-apis/","title":"Continuous Testing of ALL your APIs","description":"Continuous Testing of ALL your APIs","searchKeyword":"","content":"We talk a lot about asynchronous APIs lately at Microcks! We added many new innovative features taking advantage of the AsyncAPI specification.
These are nice additions, but we do not want them to hide the foundational essence of Microcks: offering you a consistent approach whatever the type of API. See our Why Microcks? post for a refresher.
With this post we want to demonstrate how a traditional REST API and an event-based API can be used together, and how Microcks can leverage your OpenAPI and AsyncAPI assets to ease the testing of scenarios involving both of them. It is a follow-up to our Microcks 1.1.0 release notes and our Apache Kafka Mocking and Testing posts, where we detailed usages of Microcks for asynchronous APIs.
OpenAPI & AsyncAPI scopes Let's imagine this simple use-case: you are designing a new application for registering users in your system. We always need to register and welcome new users 😉 Obviously, some other parts of your information system will also need to know when a new user registers so that they can - for example - send a welcome email, initialize the fidelity account, fill the CRM with basic information and so on.
Best practices in system design clearly promote separation of concerns and loose coupling. Thus you may build the high-level design below, mixing:
Service Oriented Architecture (SOA) for blocking interaction with the user performing the registration, Event Driven Architecture (EDA) for asynchronous and non-blocking interactions made by systems reacting to user registration. To specify the contract of these interactions you end up designing two APIs:
1 synchronous REST API that allows the actual registration, 1 asynchronous event-based API that publishes a User Signed Up message each and every time a registration succeeds. This message will be consumed by the Email, CRM, Marketing systems and any other future usages. And that's the time where OpenAPI and AsyncAPI enter the game! You will use them to describe the protocol semantics you plan to use (HTTP verbs, message broker topics, security schemes, …) and the syntactic definitions of exchanged data.
We can see that OpenAPI and AsyncAPI address different and complementary scopes of API contract definition. Even if different, you will surely benefit from a consistent approach while governing them and from feature parity when it comes to accelerating delivery.
OpenAPI & AsyncAPI testing altogether Having feature parity between synchronous and asynchronous APIs in Microcks opens the door to many new ways of efficiently testing components that provide and implement both API types. Once loaded into Microcks, you will have access to both API definitions, including semantic and syntactic elements.
Using Microcks for mocking both APIs will tremendously accelerate things, allowing the different teams to start working without waiting for each other! The mobile team can start developing the mobile frontend using REST mocks, the backend team can start working on the backend, and the CRM and email system teams can start receiving mock messages coming from Microcks.
But using Microcks for testing will also ensure you are able to reconnect the dots and validate everything - automatically! The best part is that Microcks allows testing of the REST API using the same tooling and the same code as the ones used for the event-driven API.
That is what we have demonstrated using the following CI/CD pipeline.
For each and every code change in the Git repository, this pipeline is:
Building and deploying the application - pretty classic 😉 Starting a first parallel branch where it asks Microcks to listen to the Kafka topic used by the application to publish messages. This is the test-asyncapi step, Starting a second parallel branch where it asks Microcks to test the REST API endpoints - and does this 2 times on 2 different API versions. These are the test-openapi-v1 and test-openapi-v2 steps, The branches finally join and the application is promoted. The beauty of it is that the promotion is done ONLY IF the REST API endpoints are compliant with the corresponding OpenAPI specification AND the invocation of these APIs has triggered the publication of messages on Kafka AND these messages are all valid regarding the event-based API's AsyncAPI specification. Wouah! 🎉
Wondering about the plumbing part of the pipeline? What does the code look like? Is it complex to understand, write and maintain?
For this demonstration, we used the Microcks Tekton task, so it's basically YAML. The principles remain the same whatever the pipeline technology used. Here's the YAML for launching a test on the REST API:
- name: test-openapi-v1
  taskRef:
    name: microcks-test
  runAfter:
    - deploy-app
  params:
    - name: apiNameAndVersion
      value: "User registration API:1.0.0"
    - name: testEndpoint
      value: http://user-registration-user-registration.KUBE_APPS_URL
    - name: runner
      value: OPEN_API_SCHEMA
    - name: microcksURL
      value: https://microcks-microcks.KUBE_APPS_URL/api/
    - name: waitFor
      value: 8sec
And here's the YAML for launching a test on the Async API; they're pretty similar except for the testEndpoint and the runner used:
- name: test-asyncapi
  taskRef:
    name: microcks-test
  runAfter:
    - deploy-app
  params:
    - name: apiNameAndVersion
      value: "User signed-up API:0.1.1"
    - name: testEndpoint
      value: kafka://my-cluster-kafka-bootstrap-user-registration.KUBE_APPS_URL:443/user-signed-up
    - name: runner
      value: ASYNC_API_SCHEMA
    - name: microcksURL
      value: https://microcks-microcks.KUBE_APPS_URL/api/
    - name: waitFor
      value: 20sec
    - name: secretName
      value: user-registration-broker
This demonstration uses Tekton pipelines but can also be implemented with Jenkins or GitLab CI, using either our Jenkins plugin or our portable CLI tool.
Want to play with it? Excited about the possibilities it offers you? Thinking about your next pipeline that will test both types of APIs and validate all your event-triggering rules? Wondering about chaining Dev to QA to Production promotion, including tests on different brokers and endpoints?
The opportunities are endless and we provide real code allowing you to try them. The whole User Registration demo can be found on our GitHub repository with all the instructions to deploy and run it on your Kubernetes cluster. Do not hesitate to try it out and send us feedback or ideas on what you want to see next via our Discord chat 🐙
Thanks for reading and take care. ❤️
"},{"section":"Blog","url":"https://microcks.io/blog/integrating-in-apicurio-keycloak/","title":"Integrating Microcks into Apicurio Keycloak","description":"Integrating Microcks into Apicurio Keycloak","searchKeyword":"","content":"Microcks is an amazing tool that helps developers mock their APIs seamlessly using OpenAPI specs.
This makes it easy for distributed teams to develop complex microservices without having to wait for full development cycles to complete, thus maximising team efficiency.
Apicurio Studio is another great tool to start creating your API documentation via a fully integrated OpenAPI spec editor, adding features like the ability to view your documentation live as teams collaborate and edit specs in the editor in real time.
A cool feature in Apicurio is the ability to integrate seamlessly with Microcks to mock the API definition with just a single click. This yields a great developer experience overall, as clients can start consuming mock endpoints with sample responses even if the actual API is going through the CI pipeline and has yet to reach staging or production environments.
However, the latest version of Microcks (version 1.1.0 as of writing this post) doesn't work if we follow the Apicurio docker-compose installation. One of the main reasons is that the Keycloak realm in the Apicurio installation is not up to date with the changes made in Microcks, especially with the missing "user" role in the microcks-app client in the Keycloak realm settings.
As of publishing this article, I have proposed a Pull Request to fix and ease this setup. Even if the steps described below might no longer be necessary, this post helps understand how things are supposed to work and the elements to check in case of trouble 😉
Also, users who just want to take the installation for a spin on localhost may face the issue of SSL being a prerequisite to using Keycloak with Apicurio. I strongly RECOMMEND ensuring that you have TLS set up for anything in production, but I will provide steps to overcome this limitation for setting up Apicurio and Microcks in development environments.
Downloading and getting ready with the Apicurio setup The steps to set up Apicurio are similar to the steps mentioned on their GitHub docker-compose readme page. As of writing this article, Apicurio is at version BETA 2.46.
Clone the Apicurio repository in a convenient location:
git clone https://github.com/Apicurio/apicurio-studio.git
cd into the directory to enter the Apicurio docker-compose installation workspace. It's now time to make a few edits:
cd apicurio-studio/distro/docker-compose
Setting up the correct realm configuration in Keycloak Once you're inside the docker-compose workspace, make the following edits:
Replace the Keycloak realm configuration with the correct one from the Microcks repository. Start by changing to the config directory:
cd config/keycloak
Download a copy of the correct Keycloak realm file from the Microcks repository:
wget https://raw.githubusercontent.com/microcks/microcks/master/install/keycloak-microcks-realm-full.json
Rename the existing Microcks realm file to something different:
mv microcks-realm.json microcks-realm.json.bkup
Rename the realm file you downloaded above to microcks-realm.json:
mv keycloak-microcks-realm-full.json microcks-realm.json
These steps ensure that you have the correct realm configuration to start the installation.
Once done, follow the remaining instructions as-is in the Apicurio docker-compose readme here: https://github.com/Apicurio/apicurio-studio/blob/master/distro/docker-compose/Readme.md
Ensuring the "user" role is present correctly in the microcks-app client Once your installation is up and running, log in to Keycloak with your admin credentials and follow these steps:
Click on Client -> microcks-app.
Click on the Roles tab and confirm that the "user" role is set correctly. If the role is not present, just create one using the "Add Role" button, give the role the name "user" and press "Save".
Now, click on Client again and move to microcks-app-js.
Here, click on "Scope" and ensure Full Scope Allowed is "ON".
Creating users in the Microcks Keycloak realm Now go ahead and create a user in the Microcks realm. Once a user is created, follow the steps below to ensure that the user is set up correctly.
Ensure that the user has a role called "user" in the client role section under the microcks-app client. To check this, click on the user in the Users page and navigate to the Role Mappings tab. In the Client roles drop-down, select microcks-app. You should see an entry called "user" in the Assigned Roles and Effective Roles sections (both should have it). Once this is confirmed, navigate to Clients -> microcks-app-js. Here, click on Client Scopes and then Evaluate. In the user input, enter the name of the user you want to check and then click "Evaluate". In the form that pops up below, click on "Effective Role Scope Mappings". Here, under the Client roles drop-down, select microcks-app. You should see the "user" role in the Granted Effective Client Roles section. Logging in to Microcks! Now log in to the Microcks app and mock away!!! You should see any APIs that you posted from Apicurio, or specs that you manually uploaded using the Importers section, come up correctly in the Dashboard and the APIs | Services section.
Skipping TLS!!!! Dragons ahead!! BEWARE: at no point should this be done in a production environment. TLS is one of the first steps to ensuring a strong and secure environment for the tools we are working with, and at no point should you disable the SSL-required configuration in Keycloak. This is ONLY for development purposes. Also, this change is ONLY required when you make external calls to these services.
If your Apicurio IP is in the 192.168.\*.\* range, or is 127.0.0.1 or localhost, you can just set the SSL-required setting in your Keycloak realm's 'Login' settings to false (or OFF in the UI console).
For independent developers and coders/hobbyists: use services like Let's Encrypt to get free TLS certs to use with these services.
Now for the steps:
Stop your docker-compose installation of Apicurio:
docker-compose -f "<name of the compose files>" down
Download the application.properties file into your Apicurio docker-compose config folder as shown below:
cd docker-compose/config
wget https://raw.githubusercontent.com/microcks/microcks/master/install/docker-compose/config/application.properties
wget https://raw.githubusercontent.com/microcks/microcks/master/install/docker-compose/config/logback.xml
Comment out all the lines in application.properties except the following, and change them as shown below:
security.require-ssl=false
keycloak.ssl-required=false
Restart your Apicurio installation. The Keycloak system and the RequestAuthenticator class will no longer complain about SSL being required for external requests. You can check in the docker logs as well.
I hope this blog post provides an interim solution to the Apicurio-Microcks installation woes. I have raised a bug and proposed a Pull Request to the Apicurio team for a permanent fix. Until then, Happy Mocking!!
"},{"section":"Blog","url":"https://microcks.io/blog/apache-kafka-mocking-testing/","title":"Apache Kafka Mocking and Testing","description":"Apache Kafka Mocking and Testing","searchKeyword":"","content":"We see Apache Kafka being more and more commonly used as an event backbone in new organizations every day. This is irrefutable. And while there are challenges adopting new frameworks and paradigms for the apps using Kafka, there is also a critical need to govern events and speed up delivery. To improve time-to-market, organizations need to be able to develop without waiting for the whole system to be up and running; and they need to validate that the components talking with Kafka send or receive correct messages.
That's exactly what Microcks sorts out for Kafka event-based APIs! For that, we take advantage of the AsyncAPI specification. This blog post is the follow-up to the Microcks 1.1.0 release notes and will guide you through the main usages of Microcks for Apache Kafka.
Mocking Kafka endpoints Let's start with mocking on Kafka. This first mocking part was already introduced with release 1.0.0, but it is worth mentioning so that you have the full picture 😉 When importing your AsyncAPI specification into Microcks, you end up with a new API definition within your API catalog. This is the overview of an Event typed API.
When using the Kafka capabilities of Microcks for the protocol binding of this API, you will see the highlighted information appear on the API definition:
The available bindings and the dispatching frequency, that is, the time interval at which Microcks publishes mock messages on Kafka - every 10 seconds below, The Kafka broker / endpoint Microcks is connected to. Microcks can have its own broker deployed alongside it or reuse an existing one, The Kafka topic that is used by Microcks for publishing sample messages.
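As a reminder of where these bindings come from, here is a hedged sketch of how an AsyncAPI document can declare a Kafka binding on a message (the channel name follows the user-signedup example used throughout this post):
channels:
  user/signedup:
    subscribe:
      message:
        bindings:
          kafka:
            key:             # Kafka message binding: the schema of the record key
              type: string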
Just import your AsyncAPI specification and you'll have incoming sample messages on the specified topic of the configured Kafka broker! Without writing a single line of code! You can then immediately start developing an app that consumes these messages.
Imagine you have now developed a simple consumer that listens to this UsersignedupAPI_0.1.1_user-signedup topic and just displays the messages on the console. You will end up with the following results:
// At startup time...
{ "id": "tSWj3wp68S5w2D78NFe6EcbLF6vsGKRJ", "sendAt": "1604659425835", "fullName": "Laurent Broudoux", "email": "[email protected]", "age": 41 }
{ "id": "LgAqKSJwooo5YStRjt2273lOC8UYXGid", "sendAt": "1604659425836", "fullName": "John Doe", "email": "[email protected]", "age": 36 }
// ...then 10 seconds later...
{ "id": "VV6OSh4LkGYgymjIJOoggJ1BSS89AvEK", "sendAt": "1604659435834", "fullName": "Laurent Broudoux", "email": "[email protected]", "age": 41 }
{ "id": "jabbeE3PBmbhXALKkVxwojIF2bSREDWr", "sendAt": "1604659435835", "fullName": "John Doe", "email": "[email protected]", "age": 36 }
// ...then 10 seconds later (you got it ;-)...
{ "id": "AnVrWCQyFzHQJyji3aqIe7rXC06sYQtX", "sendAt": "1604659445834", "fullName": "Laurent Broudoux", "email": "[email protected]", "age": 41 }
{ "id": "Y5Lh4ryHgVERYNvqw0IIzCQDiyqSqfpW", "sendAt": "1604659445835", "fullName": "John Doe", "email": "[email protected]", "age": 36 }
// ...until you kill your consumer...
Thanks to Microcks message templating, you can see that you receive different message ids each and every time.
The new thing in Microcks release 1.1.0 is the little green-and-red bar chart in the upper right corner of the screen capture. That's where you can launch tests of your Kafka event-based API. Let's see what it means.
Testing Kafka endpoints In Microcks, testing Kafka endpoints means: connecting to a remote Kafka topic on an existing broker in the organisation, listening for incoming messages for a certain amount of time, and checking that received messages are valid regarding the event-based API schema.
For defining such a test, you will need to specify:
The Test Endpoint, which is expressed using this simple form: kafka://host[:port]/topic A waiting timeout and an optional Secret that holds all the credential information needed to connect to a remote broker (think of user/password or certificates). Such Secrets are managed by administrators; users just reference them at test launch. In the 1.1.0 release we only deal with JSON Schema describing the message payload, but we plan to include Avro Schema support in the next releases. For more details, see the Test Runner documentation.
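To illustrate, the payload part of the user-signedup AsyncAPI definition is a JSON Schema along these lines (a condensed sketch; the exact constraints are assumptions):
payload:
  type: object
  properties:
    id:
      type: string
    sendAt:
      type: string       # a received message whose sendAt is not a string fails validation
    fullName:
      type: string
    email:
      type: string
      format: email
    age:
      type: integer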
Microcks is able to launch tests asynchronously, collect and store the results, and then present the test results as well as the received messages. See the failed test below: the received message triggered a validation error because the sendAt property was not of the expected type.
Even if it may be handy to launch tests manually for diagnostic or evaluation purposes, we recommend triggering tests automatically from your CI/CD pipeline. Microcks provides a CLI and some other options for that.
Summary In this walkthrough, you have learned how Microcks leverages AsyncAPI to provide helpful information on your event-based API. Moreover, it can reuse all the elements of your API specification to automatically simulate a Kafka provider and then validate that your application produces correct messages!
We have seen how easy it is to manually launch tests from the Microcks console, even if you've deployed your Kafka broker in a secured context with credentials and certificates. Stay tuned for the next post, where you will learn how to automate these tests from your CI/CD pipeline. We'll also demonstrate how AsyncAPI and OpenAPI can play nicely together through a full sample application available on our GitHub repo.
Take care and stay tuned. ❤️
"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.1.0-release/","title":"Microcks 1.1.0 release 🚀","description":"Microcks 1.1.0 release 🚀","searchKeyword":"","content":"We are very thrilled to announce today the Microcks 1.1.0 release — the Open source Kubernetes-native tool for API Mocking and Testing. What a ride it has been over the last months since the 1.0.0 release and our announcement of AsyncAPI support!
We received a huge amount of positive feedback from our community, including many newcomers. So we took the time to come back and explain where we are coming from and what our project's purpose is: see the "Why Microcks?" post. But above all, we wanted to go further and complete what had been started in the previous version by adding Apache Kafka event-based API testing support.
Today, Microcks is the only Open source Kubernetes-native tool that offers a consistent approach for mocking and testing your REST APIs, SOAP WebServices and now asynchronous / event-driven Kafka APIs!
So you may be wondering: "Why is this new release so fantastic and important?" Well, the 1.1.0 release means that you may now use the same tool for speeding up the delivery and governing the lifecycle of your APIs - whether synchronous or asynchronous. Microcks will open up avenues for your team to test and create robust asynchronous workflows the easy way.
For those of you who need to see some samples in action, please stay tuned! Here's the follow-up post with details on how we're doing things 😉
Mocking enhancements Aside from the major new feature around Kafka testing, we also deliver significant enhancements to OpenAPI mocking and complete the support of all the specification details. Response references as well as parameter examples in references are now fully supported.
Check our documentation for OpenAPI support here.
Not surprisingly, Microcks turning GA generated a lot of attention from users managing a big SOAP WebServices patrimony who wanted to mock them with Microcks.
Dealing with legacy is always an opportunity to discover tricky cases or ambiguous interpretations of a standard. So we discovered and fixed some issues around SOAP operations, like operation discovery, empty body support, or the management of complex WSDLs with multiple interfaces.
All of them have been fixed and tested based on the community's real-life samples. So thanks a lot to all of you 🙏 - @ivsanmendez, @sahilsethi12, @bthdimension - who helped in a very collaborative way.
Testing enhancements For the needs of the asynchronous API testing features, we introduced a new testing strategy called ASYNC_API_SCHEMA. When testing an event-based API with this strategy, Microcks will try to validate the messages received on the connected broker endpoints against a schema found in the AsyncAPI specification.
In the 1.1.0 release we only deal with JSON Schema for describing message payloads, but we plan to include Avro Schema support in the next releases. For more details, see the Test Runner documentation.
The value of testing is usually not easy to evaluate, mainly because it implies having a functional application at hand. We have had a lot of comments on this point from community users, and we have decided to provide a sample application based on our famous API Pastry sample. So now, you can have a true application at hand to evaluate OpenAPI testing 🥳
Run it in less than 100 ms with a single command line, thanks to Quarkus!
$ docker run -i --rm -p 8282:8282 quay.io/microcks/quarkus-api-pastry:latest
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,<  / /_/ /\ \
--\___\_\____/_/ |_/_/|_|\____/___/
2020-10-19 14:49:37,134 INFO [io.quarkus] (main) quarkus-api-pastry 1.0.0-SNAPSHOT native (powered by Quarkus 1.7.1.Final) started in 0.104s. Listening on: http://0.0.0.0:8282
Find our Getting Started with Tests quick start guide.
Installation experience We also added a very nice enhancement to improve your installation experience: the ability to put annotations on the Ingress resources, whether you choose to use the Helm Chart or the Kubernetes Operator. This may sound like a little detail, but it clearly reinforces the smooth integration of Microcks into the Enterprise ecosystem, allowing you to reuse ingress controller specific features or integrate with your organization's PKI through CertManager.
You can find full documentation on the Microcks Helm Chart in its README, as well as in the Kubernetes Operator README. The Helm Chart is still distributed through Helm Hub and the Operator through OperatorHub.io.
What's coming next? We still have a lot to accomplish but cannot do it without your support and ideas. Please use GitHub issues to tell us about the enhancements or new features you are dreaming of.
We are open, and you can help make Microcks an even greater tool! Please spread the word, send us some love through GitHub stars and follow us on Twitter.
To support the growth of our community, we also wanted to enhance the way we interact, and we have decided to abandon Gitter chat rooms in favor of Discord. Discord offers streams and topics that will make it easier to follow each and every thread we had on Gitter. It also provides a better mobile experience, so you will be able to follow us on the move.
Thanks for reading and for supporting us! Take care in these particularly weird moments and stay tuned. ❤️
"},{"section":"Blog","url":"https://microcks.io/blog/why-microcks/","title":"Why Microcks?","description":"Why Microcks?","searchKeyword":"","content":"Microcks recently reached a key milestone as we officially announced on Aug 11th 2020 the release of Microcks 1.0.0, our first General Availability (GA) version. With it we deliver the promise of providing an enterprise-grade solution to speed up, secure and scale your API strategy for the digital era — whatever the type of services or API.
As we have received massive supportive feedback since August, we consider it a great opportunity to take some time to come back to the reasons why we started Microcks, especially for the newcomers. Surprisingly enough, we explain a lot why mocking and testing are necessary in today's cloud-native era - see Mocking made easy with Microcks - but do not spend that much time on why we were not satisfied with existing solutions.
So here's a little refresher that will give you insights into why we started Microcks. We'll develop this through three main concerns.
#1 Business requirements without translation One huge problem in software development is the translation mismatch we usually face between business requirements and product release - nothing new here, right? Business line people usually produce spec documents that are translated into software packages, API contracts and so on. These are then put into a Git repository and thrown at the CI/CD pipelines or the staging and release process.
Fig 1: Specifications produced as documents are "translated" into software packages and API contracts. Translation leads to drifts from initial expectations. As agile and DevOps practices - like CI/CD, mocking and continuous testing - tend to become mainstream, the feedback loop is getting shorter. However, even the mocks and the tests suffer from this translation mismatch!
Existing tools that propose writing code for mocks contribute to this mismatch. Sure, they are helpful because they are lightweight and easy to start with. But at the end of the day, you have no guarantee that what was written is actually a perfect translation of the business expert's knowledge.
At Microcks we were thinking of using the concepts of example-driven design and executable specification to help define API and microservices contracts. Those concepts are both simple and powerful: just express your specification as examples - in the case of APIs and services this means request and response examples - and reuse them as the acceptance rules for the produced software. We saw it as a way to allow Business line experts and Developers to collaborate and produce a contract definition, eliminating the translation and the drift risk.
Fig 2: Specifications produced as examples within API contracts represent the "source of truth". It eliminates drifting risks. Sure, software code still has to be written to implement the behaviour, but the provided examples allow delivering fully accurate mocks faster, so that dependent consumers can start playing with the API immediately. From these examples, we are also able to deduce a comprehensive test suite that will validate the implementation when ready.
At the time we investigated the first Microcks prototypes in early 2015, a bunch of standards and toolings arose that would be of great help in making these ideas real. Supporting standards was a no-brainer for us, and luckily enough the OpenAPI and AsyncAPI specifications were handling examples!
We saw it as the confirmation of the crucial role of examples as we foresaw it. We were also truly convinced that toolings had a great role to play in fostering collaboration between personas. So we extended the range of possibilities and now Microcks supports all these formats as contract definitions.\nFig 3: Supported standards and tools in Microcks. So at first sight, Microcks is a tool that follows example-driven design to build mocks and tests from standard specifications and collaborative design toolings. But there’s more \u0026hellip;\n#2 Scaling the practice with less resources \u0026amp; more efficiency Our second concern - and thus the second reason for starting Microcks - was about scaling the practice of mocking and contract testing. When things grow and you want to apply those practices in many applications or at a large organization level, you start encountering many new issues! Since we have entered a cloud-native era where APIs, microservices and event-driven architecture are all the rage, the growth and troubles are now a reality.\nFrom our experience, the following questions arose very rapidly:\nHow to share contracts and mock definitions so that everyone uses the same set of definitions for the same APIs? How to limit the resources dedicated to mocking? If everybody is popping dedicated services for mocking, you could have a lot of resources used just for mocking, How to keep everything up-to-date and in sync, avoiding spending time refreshing definitions in different places? How to embrace the diversity of technologies? Conciliating green-field APIs and the legacy WebServices we usually build our new APIs on top of \u0026hellip; We face these challenges on a day-to-day basis working with companies that have to deal with hundreds or even thousands of APIs and microservices across their whole organisation.\nMost of the existing tools propose running the mock services on the developer laptop or within the CI/CD pipeline. This leads to many developers running a lot of short-lived de-synchronized mocks locally. Imagine building an application with a dozen dependencies and a team of a dozen developers. That makes more than 100 mocks to configure, run and keep up-to-date as the development sprints go by. This model is simply not viable at scale!\nFig 4: Running mocks on developers’ laptops or build servers implies synchronization efforts and a lot of consumed resources. This model is not scalable. We were looking for a scalable model with no risk of having out-of-sync mocks after later changes. That’s why we built Microcks using a platform approach. In an organisation, Microcks can be deployed centrally and connected to the various Git repositories. It will take care of discovering and syncing contract definitions for your APIs and provide always up-to-date endpoints mocking the last committed changes. It will also keep the history of all previously managed and deployed versions of your APIs and services - and thus help with their governance and natural referencing.\nFig 5: Microcks is central, lightweight, always-in-sync with API contracts in Git and provides always-up and scalable mocks. For Microcks we wanted a fully dynamic mocking model: you don’t need to generate nor re-deploy artifacts or packages when updating your interface or datasets. It provides a powerful matching engine to find correct answers for incoming mocking requests whilst consuming few resources.
It also provides API and CI/CD engine integrations for launching compliance tests when implementations are ready. And of course, these features are available for all the types of APIs and services within the organization: REST APIs, SOAP WebServices and Event-based APIs using Apache Kafka or some other message broker.\nThe platform approach of Microcks solves many of the issues that come with the maturity and expansion of the API mocking and testing practices. You may think it brings some constraints in the way you operate it or the location you deploy it to \u0026hellip; Let’s see that in the next section.\n#3 Everywhere \u0026amp; automated One of our strong beliefs - whilst we entered the cloud-native era - was that the advent of APIs would be global to all industries. However, all of them will have different cloud adoption strategies. As such, organisations will need API and services mocking and testing capabilities on public cloud as well as on-premises infrastructures for legacy / regulatory / security concerns.\nDespite being a “platform”, Microcks could not impose any deployment location. Also, the hybrid nature of cloud adoption will certainly drive multiple Microcks instances to segregate contract definitions per Business Unit / visibility scope / security zone or other criteria. We surely need a deployment model providing flexibility as well as ease of operations.\nWithin our team we’re early adopters of containers and Kubernetes. So the choice was natural to make Microcks Kubernetes-native from day 1. But we do not just “run on Kubernetes”; we integrate all the ecosystems like Operators, Helm, Autoscalers and so on to provide you with the easiest and most automated operational experience.\nFig 6: Deployment options for Microcks: on-premises or on the cloud. Microcks relies on Kubernetes as the abstraction layer of infrastructure and thus gives you the choice of deployment location. Whether on public cloud providers’ managed services or on an in-house Kubernetes distribution, you’ll be able to deploy and scale Microcks. And you’ll be able to do that easily, repeatedly, with a very low resources footprint and in a fully automated way.\nMicrocks does mocks differently! As a wrap-up of this “Why Microcks?” manifesto, we’d like you to remember this definition: Microcks is an Open Source Kubernetes-native tool for API Mocking and Testing. It provides an enterprise-grade solution to speed up, secure and scale your API strategy for the digital era.\nIt is “simply” doing API mocking and testing but differently:\nIt promotes collaborative example-driven design principles: you do not write code, your Business experts just describe examples, as we believe in the true value of real-life samples with no translation in-between, It supports open standards for contract definitions and also supports mainstream open collaborative tools: it does not impose a design process or tooling on you and fosters the reuse of existing assets, It provides efficient, resource-effective dynamic mocking capabilities that solve the synchronisation and governance issues your organisation will face at scale, It embraces all the different technologies that are REST, SOAP and event-based APIs. It is not just a tool for the latest trendy API style. It offers a consistent approach whatever the type of API, It can be deployed easily on-premises as well as on all the major cloud providers’ managed services. Thanks to Kubernetes and Operators it provides an easy and automated operational experience.
Last but not least, Microcks is fully Open Source and community driven. So jump in if you’re interested!\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-1.0.0-release/","title":"Microcks 1.0.0 release 🚀","description":"Microcks 1.0.0 release 🚀","searchKeyword":"","content":"Today is a very special day as we launch Microcks 1.0.0, and as it materializes a vision we had 18 months ago when we started investing more time in what was just a fun side-project at the time. That vision was about building one simple, scalable and consolidating tool for all the Enterprise services mocking and testing needs — whatever the type of services or API: green-field or legacy. Today, Microcks is the only Open source Kubernetes native tool for API Mocking and Testing supporting REST APIs, SOAP WebServices and now asynchronous / event-driven APIs!\nThis new 1.0.0 release is the first Microcks General Availability (GA) version to fully manage event-driven APIs through the support of the AsyncAPI specification. This is a major step forward as we are convinced that the transition to cloud-native applications will strongly embrace event-based and reactive architectures. Thus, the ability to speed up and govern event-based APIs with Microcks, like any other service, will be crucial and a key success factor for any modern and agile software development.\nSo, thanks to the AsyncAPI and Microcks communities’ feedback, we unleashed this new release that really demonstrates the flexibility of Microcks:\nIt can be installed on-premise or on your favorite cloud provider, It is extremely scalable and efficient to support a huge amount of business-critical API definitions, as seen in medium to very large organisations (hyperscalers are welcome), It is lightweight and fully automated to manage local and ephemeral use-cases in order to cover complex environment simulation or performance testing, It helps teams communicate by publishing their intents while gathering rapid feedback using API designs: which makes Microcks the perfect tool for designers, providers and consumers to easily iterate, whether you are already using microservices, serverless or not. With this release we mainly focused on event-driven capabilities and on finalizing the security enhancements we started with 0.9.0. Let’s do a review on what’s new.\nAsyncAPI support AsyncAPI is an Open source initiative that seeks to improve the current state of Event-Driven Architectures (EDA). Its long-term goal is to make working with EDAs as easy as it is to work with REST APIs. That goes from documentation to code generation, from discovery to event management. Most of the processes you apply to your REST APIs nowadays would be applicable to your event-driven/asynchronous APIs too. So it clearly makes sense for Microcks to support AsyncAPI too, doesn’t it 😀!\nStarting with version 1.0.0, Microcks is now able to import AsyncAPI definitions, enriching the API catalogs with event-typed APIs.\nAsyncAPI defines multiple protocol bindings to detail protocol-specific concerns. In this 1.0.0, we have decided to focus on the KAFKA binding. The Microcks installation procedure now offers to deploy a dedicated Apache Kafka broker as part of your setup or to reuse an already existing broker.\nMocking events on Kafka with Microcks is now super easy! When set up accordingly, it is also able to mock the API by publishing sample messages on a dedicated topic.
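To make this concrete, here is a minimal sketch of what an AsyncAPI definition carrying a Microcks sample message could look like - the channel and payload are hypothetical, just there to show where the example sits:\nasyncapi: 2.0.0\ninfo:\n  title: User signed-up API\n  version: 0.1.0\nchannels:\n  user/signedup:\n    subscribe:\n      message:\n        contentType: application/json\n        payload:\n          type: object\n          properties:\n            fullName:\n              type: string\n        examples:\n          - john:\n              payload: \u0026#39;{\u0026#34;fullName\u0026#34;: \u0026#34;John Doe\u0026#34;}\u0026#39;\nOnce such a definition is imported, Microcks should start publishing the john sample message on a Kafka topic derived from the API and channel names.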
See the video below for a full demonstration.\nSecurity enhancements For a few releases Microcks has already been following the “TLS everywhere” principles, but as security really matters, it was time to update some obsolete dependencies… We have focused here on three main topics: component updates, Keycloak infrastructure reuse and container image vulnerabilities.\nMicrocks internal components have all received major updates to remove any discovered threats and vulnerabilities.\nThe frontend part was bumped from Angular 6.1 to Angular 8.1 with all dependencies upgraded, The backend part was bumped from Spring Boot 1.5.17 to Spring Boot 2.2.5 with all dependencies upgraded, The Keycloak server was bumped from 4.8.3 to 10.0.1. Many users from the community also asked for enhancements when reusing an existing Keycloak infrastructure. They are seasoned Keycloak users, have some complex setup on their realms and want to add Microcks support with no collisions with their existing configuration. So we reviewed our configuration and setup options to be able to integrate into an existing Keycloak infrastructure with no impact.\nMore details here: https://github.com/microcks/microcks/issues/237\nFinally, we did move our container image repositories from Docker Hub to the Quay.io infrastructure. The major reason for moving to Quay.io is their excellent, built-in security vulnerability scan for container images. Now, for each and every commit into the Microcks repository, newly produced container images are scanned, and a notification is triggered if a vulnerability is found.\nAll the latest images from Microcks now have an “All green” scan report ;-)\nYou can now find all the Microcks container images and their security scan reports from the same location: https://quay.io/organization/microcks. Check here the status of the latest images.\nWhat’s coming next? As you read this post, you have seen that there are some huge new features in this 1.0.0 release, in just four months since the previous one. Sure, we did not include everything we had in mind, but we did put the effort into the topics that matter the most based on community feedback: kudos to all our users, contributors and friends (special thanks to the AsyncAPI team for listing us in their tooling ecosystem).\nWe still have a lot to accomplish but cannot do it without your support and ideas: tell us about the enhancements or new features you are dreaming of using GitHub issues.\nWe are open and you can help make Microcks an even greater tool! Please spread the word, send us some love through GitHub stars, follow us on Twitter, send us Gitter messages or — even better — blog posts or tweets and tell us how you use Microcks.\n"},{"section":"Blog","url":"https://microcks.io/blog/install-microcks-on-aws/","title":"Install Microcks on AWS for a test drive 🧪","description":"Install Microcks on AWS for a test drive 🧪","searchKeyword":"","content":"Whilst we recommend installing Microcks on Kubernetes for easy management and enhanced capabilities, it can also be deployed onto a regular Virtual Machine. This post details how you can set up Microcks on an AWS EC2 instance if you’re familiar with this environment and want a quick test drive. It takes something like 6–7 minutes to complete from end to end.
It illustrates the setup documentation using Docker Compose.\nThis will give you a Microcks installation on an AWS EC2 instance running Ubuntu 18.04 LTS:\nall-in-one install: Microcks, Keycloak and MongoDB on the same box, local storage mounted through Docker volumes, self-signed certificates for rapid testing. EC2 instance launch The first step is — of course — to order a new EC2 instance at Amazon. We will use a t2.small instance running Ubuntu Server 18.04 LTS as shown below, but Microcks will work with any Linux distro, so feel free to use the one that suits you best. One requirement is to add new custom TCP rules to the network security group:\nPort 8080 should be reachable as it is the main port used by Microcks, Port 8543 should also be reachable as it is used by Keycloak. The above screenshot highlights our test configuration elements. Configure your access method with your favorite SSH key and launch your instance!\nMicrocks setup Once your instance is up and running, you will need its public hostname at hand for setup. Just SSH into the running VM, clone the https://github.com/microcks/microcks repository and use the setup-microcks-apt.sh script we provide.
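A minimal sketch of this session could look like the following - the exact location of the script inside the cloned repository is an assumption here, so double-check it before running:\n$ git clone https://github.com/microcks/microcks\n$ cd microcks\n$ sudo ./setup-microcks-apt.sh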
This session first updates the instance with docker tooling, generates self-signed certificates and tunes the configuration files. Microcks is finally launched using docker-compose.\nMicrocks first startup Once Microcks is running on the EC2 instance, you just have to open your favorite browser and complete the first login to the application. The video below illustrates this using the default admin user with the 123 password (that you should immediately change).\nAnd here we are! In no time! Ready to go and to import some OpenAPI contracts, Postman Collections or SoapUI Projects into the installation. And pretty soon AsyncAPI! Have a look at Getting Started using Microcks for importing samples.\nDo you think Microcks as a Service directly from the AWS Marketplace would be of great help for you or your team? Please let us know what you think by voting and commenting on this issue.\nHappy mocking ;-)\n"},{"section":"Blog","url":"https://microcks.io/blog/microcks-0.9.0-release/","title":"Microcks 0.9.0 release 🚀","description":"Microcks 0.9.0 release 🚀","searchKeyword":"","content":"I am delighted to announce Microcks release 0.9.0 — the Open source Kubernetes native tool for API Mocking and Testing. This new version introduces a tremendous amount of enhancements and new features.\nBig thanks to our growing community for all the work done, the raised issues and the collected feedback during the last 5 months to make it possible.\nThis release was the preparation for becoming more Enterprise-grade, and we are glad that Microcks is in production in more and more medium to large organisations. They use it to manage different use cases and sort out some business-critical API life cycle management and development pains.\nSo we worked a lot on installation and management features but also on some noticeable enhancements to existing core features. Let’s do a quick review of what’s new in this release!\nInstallation experience First contact with a new solution usually comes from the installation process itself, and we care about user experience to make your life easier ;-)\nMicrocks is now available on Helm Hub and has its own Chart repo. So installing Microcks via Helm is just 2 commands:\n$ helm repo add microcks https://microcks.io/helm $ helm install microcks microcks/microcks --version 0.9.0 --set microcks.url=microcks.$(minikube ip).nip.io,keycloak.url=keycloak.$(minikube ip).nip.io More details here: https://hub.helm.sh/charts/microcks/microcks\nThe Microcks Operator is available on OperatorHub.io, has been upgraded to version 0.3.0 and now manages Seamless Upgrades as defined by the capability model:\nWhile this version is still tagged as Alpha (till we reach the Full Lifecycle capability level at least), it is already in production on many Kubernetes clusters and has been reported as rock solid by community users.\nMore details here: https://operatorhub.io/operator/microcks\nOpenShift Templates have been created for OpenShift 4.x, upgrading some components and removing the cluster-admin privileges that were mandatory so far.\nIt’s now easier to install it in your own project without requiring security operations at the cluster level.\nWhether you are using Helm or the Operator to install Microcks, we have introduced some new and useful options: reusing existing Keycloak or MongoDB instances, reusing secrets for credentials, reusing TLS certificates for ingress security.\nThese options let you reuse already existing and shared services that you may have provisioned with your favorite options, allowing a better integration with your Enterprise ecosystem.\nAs security matters and is one of our top priorities, TLS is now the default for each setup method — we’ll generate self-signed certificates for you if none are provided. On the packaging side, we also released a new container image that is now based on the Red Hat Universal Base Image.\nThis led to a lighter image — we reduced the size from 240 MB to 160 MB — that is also much more secure, as the UBI has a reduced attack surface and is very frequently updated and patched.\nManagement features As an administrator you’ll need an effective way to manage users and repository access rights.\nMost of the features already existed but were neither documented nor easily accessible, so we fixed that. You’ll find documentation on:\nHow to manage your users by assigning them application roles, How to define secrets for your Enterprise repositories such as Git ones, How to snapshot and restore your repository content. As a repository content manager, we added new features regarding repository organization. With this new release, you’ll now be able to assign labels to your APIs or services. This offers you a lot of flexibility to categorize and organize your repository the way you would like.\nLabels can also be used on the main repository page, allowing you to filter and display the most important labels when browsing repository content.\nFull details are documented here: https://microcks.io/documentation/guides/administration/organizing-repository/\nMocking enhancements The mocking engine of Microcks did receive some enhancements too!\nThe most noticeable is the ability to generate dynamic response content.
We still do think and stick to the idea that non-generated samples are of real value… but this was a recurrent community request and we finally listened and changed our mind a bit ;-)\nSo now, you can use variable references and functions to describe dynamic results that help simulate the real expected behavior, for example:\n{ \u0026#34;id\u0026#34;: \u0026#34;{{ randomString(64) }}\u0026#34;, \u0026#34;date\u0026#34;: \u0026#34;{{ now(dd/MM/yyyy) }}\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Hello {{ request.body/name }}\u0026#34; } Upon invocation, the mock engine will use this template and interpret the expressions between double-mustaches ({{ and }}).\nSee full details documented here: https://microcks.io/documentation/references/templates/\nWe also added some nice documentation enhancements like Content-type negotiation, Parameters constraints and Custom dispatching rules.\nTesting enhancements We also introduced Tekton support, bringing Microcks to this great new Kubernetes-native CI/CD tooling. We provide Tekton tasks and pipeline samples that allow you to integrate Microcks test steps within your pipelines.\nHere is an OpenShift 4.x example:\nWe also bring the capability of overriding headers during tests for better integration with the tested endpoint’s environment.\nWhat’s coming next? So you have seen there are definitely a lot of enhancements in this new 0.9.0 release!\nThat’s just a start, as we are going to tackle some big topics for the 1.0.0 release and would love your feedback and comments on our roadmap prioritization:\nSupport of the AsyncAPI standard for the mocking of event-driven APIs, Refinement of the Role Based Access Control model to allow segmentation and delegation of management of different repository parts, Launch of our API Mock Hub dedicated public marketplace to promote the Microcks ecosystem, use cases, ready-to-use mocks and partners. So stay tuned!\n"},{"section":"","url":"https://microcks.io/community/","title":"Microcks community and resources","description":"Microcks community and resources","searchKeyword":"","content":""},{"section":"","url":"https://microcks.io/discord-invite/","title":"Microcks Discord Invite","description":"","searchKeyword":"","content":""},{"section":"","url":"https://microcks.io/documentation/","title":"Documentation","description":"Microcks documentation","searchKeyword":"","content":"Welcome to the Microcks documentation! It is intended to serve as a reference resource for people discovering Microcks or users who already have some familiarity with it and want to learn more.\nYou may find here tutorials, guides and reference materials, as well as explanations to help you during your learning journey. Our documentation is organized following the principles of the Diátaxis methodology, which has this idea of a cycle of documentation:\nFurthermore, we welcome contributions to foster a collaborative environment. You can actively participate by suggesting improvements, reporting errors, or adding new content.
This collaborative approach not only keeps the documentation up-to-date but also encourages a sense of ownership and community engagement, making the project stronger and more reliable over time.\nBy the community, for the community 🙌\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/openapi-conventions/","title":"OpenAPI Conventions","description":"","searchKeyword":"","content":"Conventions In addition to schema information, Microcks uses OpenAPI Example Objects to produce working mocks and build a test suite for validating your implementation.\nAs example fragments are actually distributed along the OpenAPI specification, Microcks collects fragments and tries to associate them by name. Microcks only takes care of comprehensive request/response examples - which means that if you provide examples for input elements (parameter, requestBody) but not for output (response), incomplete examples will be discarded.\nIllustration The Cars sample. It is a simple API that allows registering cars to an owner, listing cars for this owner and adding passengers to a car. Within this sample specification, we have defined 2 mocks - one for the registering operation and another for the listing cars operation:\nThe POST /owner/{owner}/car operation defines a sample called laurent_307 where we\u0026rsquo;ll register a Peugeot 307 for Laurent, The GET /owner/{owner}/car operation defines a sample called laurent_cars where we\u0026rsquo;ll list the cars owned by Laurent. Specifying request params Specifying request params encompasses path params, query params and header params. Within our two examples, we have to define the owner path param value - one for the laurent_307 mock and another for the laurent_cars mock.\nPath parameters This is done within the parameters part of the corresponding API path, on line 83 of our file:\nparameters: - name: owner in: path description: Owner of the cars required: true schema: format: string type: string examples: laurent_cars: summary: Value for laurent related examples value: laurent laurent_307: $ref: \u0026#39;#/components/examples/param_laurent\u0026#39; One thing to notice here is that the Microcks importer supports the use of references like '#/components/examples/param_laurent' to avoid duplication of complex values.\nQuery parameters Query parameters are specified using parameters defined under the verb of the specification, as you may find on line 20. The snippet is represented below for the laurent_cars mock:\n- name: limit in: query description: Number of result in page required: false schema: type: integer examples: laurent_cars: value: 20 Specifying request payload The request payload is used within our laurent_307 sample. It is specified under the requestBody of the specification, as you may find starting on line 55. Request payload may refer to OpenAPI schema definitions like in the snippet below:\nrequestBody: description: Car body content: application/json: schema: $ref: \u0026#39;#/components/schemas/Car\u0026#39; examples: laurent_307: summary: Creation of a valid car description: Should return 201 value: \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;307\u0026#34;, \u0026#34;model\u0026#34;: \u0026#34;Peugeot 307\u0026#34;, \u0026#34;year\u0026#34;: 2003}\u0026#39; required: true
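Once this artifact is imported, the laurent_307 example becomes a live mock. Here is a sketch of invoking it, assuming a local instance on port 8080 and the REST mock path built from the API title and version (OpenAPI Car API and 1.1.0, with spaces replaced by +):\ncurl -X POST http://localhost:8080/rest/OpenAPI+Car+API/1.1.0/owner/laurent/car -H \u0026#39;Content-Type: application/json\u0026#39; -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;307\u0026#34;, \u0026#34;model\u0026#34;: \u0026#34;Peugeot 307\u0026#34;, \u0026#34;year\u0026#34;: 2003}\u0026#39;\nIt should reply with the 201 response attached to the laurent_307 example.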
Specifying response payload The response payload is used within our laurent_cars sample. It is defined under the HTTP status of the specification, as you may find starting on line 40. Response payload may refer to OpenAPI schema definitions like in the snippet below:\nresponses: 200: description: Success content: application/json: schema: type: array items: $ref: \u0026#39;#/components/schemas/Car\u0026#39; examples: laurent_cars: value: |- [ {\u0026#34;name\u0026#34;: \u0026#34;307\u0026#34;, \u0026#34;model\u0026#34;: \u0026#34;Peugeot 307\u0026#34;, \u0026#34;year\u0026#34;: 2003}, {\u0026#34;name\u0026#34;: \u0026#34;jean-pierre\u0026#34;, \u0026#34;model\u0026#34;: \u0026#34;Peugeot Traveller\u0026#34;, \u0026#34;year\u0026#34;: 2017} ] No content response payload Now let\u0026rsquo;s imagine the case where you\u0026rsquo;re dealing with an API operation that returns \u0026ldquo;No Content\u0026rdquo;. This could be - for example - an operation that takes care of deleting a car from the database and returns a simple 204 HTTP response code once done.\nIn that case, we cannot rely on Example Objects because the response typically has no content we can attach an example to. We need another way to specify the matching of this response with an incoming request. For this, we introduced a specific x-microcks-refs extension that lets you tell Microcks which requests this response should match.\nLet\u0026rsquo;s illustrate the above-mentioned case with the snippet below:\n/owner/{owner}/car/{car}: delete: parameters: - name: owner in: path description: Owner of the cars required: true schema: format: string type: string examples: laurent_307: value: laurent laurent_jp: value: laurent - name: car in: path description: Owner of the cars required: true schema: format: string type: string examples: laurent_307: value: \u0026#39;307\u0026#39; laurent_jp: value: \u0026#39;jean-pierre\u0026#39; responses: 204: description: No Content x-microcks-refs: - laurent_307 - laurent_jp When Microcks receives a DELETE /owner/laurent/car/307 or DELETE /owner/laurent/car/jean-pierre call, it will just reply with a 204 HTTP response code.\n💡 Note that this association also works if you defined some requestBody examples for the operation.\nOpenAPI extensions Microcks proposes custom OpenAPI extensions to specify mock organizational or behavioral elements that cannot be deduced directly from the OpenAPI document.\nAt the info level of your OpenAPI document, you can add label specifications that will be used for organizing the Microcks repository. See the illustration below and the use of the x-microcks extension:\nopenapi: 3.1.0 info: title: OpenAPI Car API description: Sample OpenAPI API using cars contact: name: Laurent Broudoux url: https://github.com/lbroudoux license: name: MIT License url: https://opensource.org/licenses/MIT version: 1.1.0 x-microcks: labels: domain: car status: beta team: Team A [...] At the operation level of your OpenAPI document, you can add delay/frequency and dispatcher specifications. These will be used to customize the dispatching rules of your API mocks. Let\u0026rsquo;s give an example for OpenAPI using the x-microcks-operation extension:\n[...]
post: summary: Add a car to current owner description: Add a car to current owner description operationId: addCarOp x-microcks-operation: delay: 100 dispatcher: SCRIPT dispatcherRules: | def path = mockRequest.getRequest().getRequestURI(); if (!path.contains(\u0026#34;/laurent/car\u0026#34;)) { return \u0026#34;Not Accepted\u0026#34; } def jsonSlurper = new groovy.json.JsonSlurper(); def car = jsonSlurper.parseText(mockRequest.getRequestContent()); if (car.name == null) { return \u0026#34;Not Accepted\u0026#34; } return \u0026#34;Accepted\u0026#34; [...] 💡 Note that we can use multi-line notation in YAML, but we will have to escape everything and put \ before double-quotes and \n characters if specified using JSON.\nOnce labels and dispatching rules are defined that way, they will overwrite the different customizations you may have done through the UI or API at the next import of the OpenAPI document.\nStarting with Microcks 1.11.0, you can also declare mock constraints using the x-microcks-operation extension:\n[...] post: summary: Add a car to current owner description: Add a car to current owner description operationId: addCarOp x-microcks-operation: delay: 100 parameterConstraints: - name: Authorization in: header required: true recopy: false mustMatchRegexp: \u0026#34;^Bearer\\\\s[a-zA-Z0-9\\\\._-]+$\u0026#34; [...] "},{"section":"Documentation","url":"https://microcks.io/documentation/overview/","title":"Overview","description":"Below are all the documentation pages related to **Overview**.","searchKeyword":"","content":"Overview: Define Microcks scope and capabilities Welcome to Microcks Overview! Our Overview section will define the scope and main concepts of Microcks features and capabilities.\n💡 Remember Contribute to Microcks Overview\nCode isn\u0026rsquo;t the only way to contribute to OSS; Dev Docs are a huge help that benefit the entire OSS ecosystem. At Microcks, we value Doc contributions as much as every other type of contribution. ❤️\nTo get started as a Docs contributor:\nFamiliarize yourself with our project\u0026rsquo;s Contribution Guide and our Code of Conduct Head over to our Microcks Docs Board Pick an issue you would like to contribute to and leave a comment introducing yourself. This is also the perfect place to leave any questions you may have on how to get started. If there is no work done in that Docs issue yet, feel free to open a PR and get started! Docs contributor questions\nDo you have a documentation contributor question and you\u0026rsquo;re wondering how to tag us into a GitHub discussion or PR? Have no fear!\nJoin us on Discord and use the #documentation channel to ping us!\n"},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/deployment-topologies/","title":"Deployment topologies","description":"","searchKeyword":"","content":"Introduction We often get questions from people who are adopting Microcks about the deployment topology: where to deploy it and which personas to target? Microcks is modular and flexible, and it runs in many different ways; having many options can make it unclear to novice users where to begin and how to get started.\nIn this article we share our experience with different topologies - or patterns - we\u0026rsquo;ve seen adopted depending on organization maturity and priorities. Even if those patterns are presented in an ordered way, there\u0026rsquo;s no rule of thumb and you may choose to go the other way around if it makes sense.\n💡 There may be some other topologies we have missed here.
Please share them with the community if they help you be successful!\n1. Global centralized instance The first deployment topology that people often start with is the one of the Globally shared, centralized instance. In this topology, Microcks is deployed on a centralized infrastructure and can be accessed by many different teams. It allows discovering and sharing the same API mocks, sourced by one or many Git repositories. It can also be used to run tests on deployed API endpoints.\nIn such a topology, Microcks is always up-and-running and should be dimensioned to host an important number of users and APIs, with secured access, RBAC and segregation features turned on. As datasets and response times are instance-scoped settings, they cannot be customized for different use-cases.\nBenefits ✅ Easy to start with - just one deployment! ✅ Acts immediately as a natural catalog for all teams\u0026rsquo; APIs ✅ Centralizes both mocks and tests with multi-versions and history\nConcerns 🤔 Security and RBAC configuration 🤔 Needs proper dimensioning 🤔 Too many APIs? Maybe the private ones are not \u0026ldquo;that important\u0026rdquo;\u0026hellip; ❌ Different mock datasets for different use-cases ❌ Different API response times for different use-cases\n2. Local instances As a developer, you may want to use Microcks directly on your laptop during your development iterations and within your unit tests with the help of Testcontainers. Running it directly in your IDE is also possible via DevContainers. This eases the pain in managing dependencies and gives you fast feedback.\nIn such a topology, Microcks instances are considered \u0026ldquo;Ephemeral\u0026rdquo; and thus don\u0026rsquo;t keep history. They can be configured with custom datasets but with the risk of drifting. Frequent synchronization needs to happen to avoid this.\nBenefits ✅ Directly run in IDE or unit tests! ✅ Super fast iterations thanks to Shift-left ✅ Only the API you\u0026rsquo;re working on or the ones you need ✅ Project specific configuration: datasets, response times\nConcerns 🤔 No history! 🤔 How to measure improvements? 🤔 How to be sure non-regression tests are also included? 🤔 Needs frequent sync to avoid drifts ❌ Limited connection to central infrastructure (eg: some message brokers)\n3. Process-scoped instances As an intermediate solution, we see more and more adopters deploying Microcks for scoped use-cases in an \u0026ldquo;Ephemeral\u0026rdquo; way. The goal is to provide a temporary environment with mocked dependencies for: development teams, performance testing campaigns, Quality Assurance needs, training, partner onboarding and more. This approach can also be coined Sandbox-as-a-service: a way to provide testing environments on demand. It is typically integrated, orchestrated and controlled by workflows such as long-running CI/CD pipelines or provisioning processes.\nThose instances are considered \u0026ldquo;Ephemeral\u0026rdquo; or temporary, but it could be: minutes, days or even months. They allow fine-grained configuration and customization as they\u0026rsquo;re dedicated to one single use case or project/team. Depending on the use-case, you may pay great attention to management automation, and that\u0026rsquo;s where the Microcks Kubernetes Operator can make sense in a GitOps approach (see the sketch after this section\u0026rsquo;s concerns).\nBenefits ✅ \u0026ldquo;Ephemeral\u0026rdquo;: saves money vs comprehensive environments ✅ Only the API you need (eg. your dependencies) ✅ Project specific configuration: datasets, response times ✅ Project specific access control\nConcerns 🤔 No history! 🤔 No global or consolidated vision 🤔 Automation of the provisioning process
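To illustrate the automation point, provisioning such an instance with the Kubernetes Operator boils down to applying a Custom Resource that a GitOps engine can reconcile. A minimal sketch - the hostnames are placeholders and the exact spec fields may vary across operator versions, so check the operator documentation:\napiVersion: microcks.github.io/v1alpha1\nkind: MicrocksInstall\nmetadata:\n  name: microcks-sandbox\nspec:\n  name: microcks-sandbox\n  version: 1.9.0\n  microcks:\n    url: microcks-sandbox.example.com\n  keycloak:\n    install: true\n    url: keycloak-sandbox.example.com\nDeleting the resource tears the whole instance down, which fits the temporary nature of these environments.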
4. Regional instances The final pattern to take into consideration is the one of Regional and scoped instances. It can be used from the start in the case of a scoped adoption of Microcks: it presents more or less the same characteristics as the Globally shared, centralized instance, but you decide to restrict it to a specific scope in your organization. It could be for a functional domain, for an application, or whatever makes sense from a governance point of view. A regional instance will hold all the API mocks and tests - for both public and private APIs - and will be the reference to measure quality and improvements, and to source some other catalogs.\nAs this pattern can be used in standalone mode, we think it\u0026rsquo;s best to consider those instances as contributors to a consolidated vision of the available APIs. Hence, you will eventually have to consider some promotion or release process.\nBenefits ✅ All the APIs of the region/division: public \u0026amp; private ✅ All the history on what has changed, what has been tested ✅ Ideal for building a comprehensive catalog of the region ✅ Easy to manage Role based access control and delegation\nConcerns 🤔 Only the APIs of the region: makes global discovery hard ❌ Different mock datasets for different use-cases ❌ Different API response times for different use-cases\nMicrocks at scale Do you have to choose between one topology and another? Yes, you definitely have to define priorities to ensure a smooth and incremental adoption. But, ultimately, all of those topologies can play nicely together to handle different situations and stages of your Software Development Life-Cycle.\nWe see users with great maturity confirming How Microcks fit and unify Inner and Outer Loops for cloud-native development. They deploy it using many topologies in order to have the same tool using the same sources-of-truth throughout the whole lifecycle. That\u0026rsquo;s what we call: Microcks at scale! 🚀\nThe schema below represents our vision of how those deployment topologies can be combined to serve the different personas.\nFrom left to right:\nIt all starts with Local Instances integrated into the Developers Inner Loop flow. It eases their life in external dependencies management and provides them immediate feedback using contract-testing right in their unit tests (see the sketch after this list), Then, Regional Instances may be fed with the promoted API artifacts coming from design iterations. API artifacts contribute to the comprehensive catalog of this BU/domain/application. API Owners can use those instances to launch contract-tests on deployed API endpoints and track quality metrics and improvements over time, Temporary Process-scoped Instances can be easily provisioned, on-demand, using the regional instances as natural reference catalogs. They allow applying different settings (access-control, datasets, response time,\u0026hellip;) depending on the project or use-case needs. Platform Engineers can fully automate this provisioning, in a reproducible way, saving costs vs maintaining comprehensive environments, Finally, the Globally shared, centralized instance can serve as the consolidated catalog of the public APIs in the organization, offering access to the corresponding mocks to enhance discoverability and tracking of promoted APIs. Enterprise Architects and API consumers will find it useful as the centralized source-of-truth for all the organization APIs.
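To make the Local Instances stage concrete, here is a minimal sketch of an ephemeral Microcks spun up from a unit test with the Java Testcontainers module - the artifact path, service name and version are hypothetical:\nimport io.github.microcks.testcontainers.MicrocksContainer;\nimport java.io.File;\n\npublic class LocalMocksSketch {\n    public static void main(String[] args) throws Exception {\n        // Throwaway Microcks instance backed by the lightweight Uber image\n        try (MicrocksContainer microcks = new MicrocksContainer(\u0026#34;quay.io/microcks/microcks-uber:latest\u0026#34;)) {\n            microcks.start();\n            // Load the API contract under test as the main artifact\n            microcks.importAsMainArtifact(new File(\u0026#34;target/test-classes/pastry-openapi.yaml\u0026#34;));\n            // Point the application under test at the ephemeral mock endpoint\n            String mockUrl = microcks.getRestMockEndpoint(\u0026#34;API Pastry\u0026#34;, \u0026#34;1.0.0\u0026#34;);\n            System.out.println(\u0026#34;Mocks available at \u0026#34; + mockUrl);\n        }\n    }\n}\nThe container is discarded when the block exits, which is exactly the \u0026ldquo;Ephemeral\u0026rdquo; behavior described above.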
"},{"section":"Documentation","url":"https://microcks.io/documentation/references/container-images/","title":"Container Images","description":"","searchKeyword":"","content":"Introduction Microcks components are distributed as OCI container images that can be executed using container runtimes such as Docker or Podman. All our container images are produced for both linux/amd64 and linux/arm64 architectures.\nThe components container image tags are respecting the following versioning scheme:\nThe x.y.z tag identifies a released and stable version of the image, produced from a GitHub repo tag. This is an immutable tag, The latest tag identifies the latest released and stable version of the image. This is a mutable tag, The nightly tag identifies the latest built - and maybe un-stable - version of the image. This is a mutable tag. Microcks images repositories are primilarly located on Quay.io and synchronized to the Docker Hub.\nContainer images Here is below the list of available container images. For more information on their role in the archutecture, you may check the Architecture \u0026amp; deployment options explanations.\nMicrocks App The Microcks main web application (also called webapp) that holds the UI resources as well as API endpoints. It is produced from https://github.com/microcks/microcks/tree/master/webapp repo folder.\nRepository Pull command Available tags quay.io/microcks/microcks docker pull quay.io/microcks/microcks:latest Quay.io docker.io/microcks/microcks docker pull microcks/microcks:latest Docker.io Microcks Async Minion The Microcks Async Minion (microcks-async-minion) is a component responsible for publishing mock messages corresponding to AsyncAPI definitions. It is produced from https://github.com/microcks/microcks/tree/master/minions/async repo folder.\nRepository Pull command Available tags quay.io/microcks/microcks-async-minion docker pull quay.io/microcks/microcks-async-minion:latest Quay.io docker.io/microcks/microcks-async-minion docker pull microcks/microcks-async-minion:latest Docker.io Microcks Postman runtime The Microcks Postman runtime (microcks-postman-runtime) allows the execution of Postman Collection tests. It is produced from the https://github.com/microcks/microcks-postman-runtime repository.\nRepository Pull command Available tags quay.io/microcks/microcks-postman-runtime docker pull quay.io/microcks/microcks-postman-runtime:latest Quay.io docker.io/microcks/microcks-postman-runtime docker pull microcks/microcks-postman-runtime:latest Docker.io Microcks Uber The Uber distribution is designed to support Inner Loop integration or Shift-Left scenarios to embed Microcks in your development workflow, on a laptop, within your unit tests easy. It is produced from https://github.com/microcks/microcks/tree/master/distro/uber repo folder.\nThe Uber distribution provide additional tags with -native suffix (xyz-native, latest-native and nightly-native) that allows pulling a GraalVM native packageg image with reduced image size and faster bootstrap time. 
However, some dynamic features like the SCRIPT dispatcher are not available in this native flavour.\nRepository Pull command Available tags quay.io/microcks/microcks-uber docker pull quay.io/microcks/microcks-uber:latest docker pull quay.io/microcks/microcks-uber:latest-native Quay.io docker.io/microcks/microcks-uber docker pull microcks/microcks-uber:latest docker pull microcks/microcks-uber:latest-native Docker.io Microcks Uber Async Minion The Microcks Uber Async Minion (microcks-uber-async-minion) is responsible for publishing mock messages corresponding to AsyncAPI definitions with the Uber distribution. It is produced from the https://github.com/microcks/microcks/tree/master/distro/uber-async repo folder.\nRepository Pull command Available tags quay.io/microcks/microcks-uber-async-minion docker pull quay.io/microcks/microcks-uber-async-minion:latest Quay.io docker.io/microcks/microcks-uber-async-minion docker pull microcks/microcks-uber-async-minion:latest Docker.io Microcks Operator This container image is a Kubernetes Operator for installing and managing Microcks using Custom Resources. It is produced from the https://github.com/microcks/microcks-operator repository.\nRepository Pull command Available tags quay.io/microcks/microcks-operator docker pull quay.io/microcks/microcks-operator:latest Quay.io docker.io/microcks/microcks-operator docker pull microcks/microcks-operator:latest Docker.io Microcks CLI This container image is a CLI used for interacting with a Microcks instance. It is produced from the https://github.com/microcks/microcks-cli repository.\nRepository Pull command Available tags quay.io/microcks/microcks-cli docker pull quay.io/microcks/microcks-cli:latest Quay.io docker.io/microcks/microcks-cli docker pull microcks/microcks-cli:latest Docker.io Software Supply Chain Security Software supply chain security combines best practices from risk management and cybersecurity to help protect the software supply chain from potential vulnerabilities. We aim to provide the most comprehensive information about the software, the people who wrote it, and the sources it comes from, like registries, GitHub repositories, codebases, or other open source projects. It also includes any vulnerabilities that may negatively impact software security – and that’s where software supply chain security comes in.\nVulnerabilities All our container images are scanned for vulnerabilities with both Clair on Quay.io and Docker Scout on Docker Hub. Scanning reports are available for each image on every repository.\nThe container images base layers as well as the Microcks application dependencies are regularly updated as per the SECURITY-INSIGHTS.yml and DEPENDENCY_POLICY.md files you may find in each GitHub source repository.\nSignatures All our images are signed with Cosign, using the Sigstore framework.
The signing is actually done from within our GitHub Actions process, using the GitHub OIDC token associated with the Actions process.\nTo verify the signature of a Microcks container image you just pulled, you first have to check the name of the Actions process - usually build-verify.yml - and the tag or branch from which it has been produced.\nFor example: you can verify the signature of the microcks:nightly image, built from the 1.11.x branch, using these commands:\nIMAGE_WITH_DIGEST=`docker inspect --format=\u0026#39;{{index .RepoDigests 0}}\u0026#39; quay.io/microcks/microcks:nightly` cosign verify $IMAGE_WITH_DIGEST --certificate-identity https://github.com/microcks/microcks/.github/workflows/build-verify.yml@refs/heads/1.11.x --certificate-oidc-issuer https://token.actions.githubusercontent.com | jq . which may produce the following output:\n// Verification for quay.io/microcks/microcks@sha256:7241f2c0bbd9f5ba72c2bc908e9ee035db40c4fcff61d7d75788ddb8df139e2c -- // The following checks were performed on each of these signatures: // - The cosign claims were validated // - Existence of the claims in the transparency log was verified offline // - The code-signing certificate was verified using trusted certificate authority certificates [ { \u0026#34;critical\u0026#34;: { \u0026#34;identity\u0026#34;: { \u0026#34;docker-reference\u0026#34;: \u0026#34;quay.io/microcks/microcks\u0026#34; }, \u0026#34;image\u0026#34;: { \u0026#34;docker-manifest-digest\u0026#34;: \u0026#34;sha256:7241f2c0bbd9f5ba72c2bc908e9ee035db40c4fcff61d7d75788ddb8df139e2c\u0026#34; }, \u0026#34;type\u0026#34;: \u0026#34;cosign container image signature\u0026#34; }, \u0026#34;optional\u0026#34;: { \u0026#34;1.3.6.1.4.1.57264.1.1\u0026#34;: \u0026#34;https://token.actions.githubusercontent.com\u0026#34;, \u0026#34;1.3.6.1.4.1.57264.1.2\u0026#34;: \u0026#34;push\u0026#34;, \u0026#34;1.3.6.1.4.1.57264.1.3\u0026#34;: \u0026#34;edbe55f846f554d500ac3dc33c8346195e70f2ac\u0026#34;, \u0026#34;1.3.6.1.4.1.57264.1.4\u0026#34;: \u0026#34;build-verify-package\u0026#34;, \u0026#34;1.3.6.1.4.1.57264.1.5\u0026#34;: \u0026#34;microcks/microcks\u0026#34;, \u0026#34;1.3.6.1.4.1.57264.1.6\u0026#34;: \u0026#34;refs/heads/1.11.x\u0026#34;, \u0026#34;Bundle\u0026#34;: { \u0026#34;SignedEntryTimestamp\u0026#34;: \u0026#34;MEQCIGOggaElAVzClnzPfl1gs3+ZgBwl22XC51YhbTdqu+f8AiAQ3Nfk/GXwIe2X7KSVwFubiuJfdVyPeZQQN0mhnHVkpA==\u0026#34;, \u0026#34;Payload\u0026#34;: { \u0026#34;body\u0026#34;: \u0026#34;ey---REDACTED---0=\u0026#34;, \u0026#34;integratedTime\u0026#34;: 1733173324, \u0026#34;logIndex\u0026#34;: 152950063, \u0026#34;logID\u0026#34;: \u0026#34;c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d\u0026#34; } }, \u0026#34;Issuer\u0026#34;: \u0026#34;https://token.actions.githubusercontent.com\u0026#34;, \u0026#34;Subject\u0026#34;: \u0026#34;https://github.com/microcks/microcks/.github/workflows/build-verify.yml@refs/heads/1.11.x\u0026#34;, \u0026#34;githubWorkflowName\u0026#34;: \u0026#34;build-verify-package\u0026#34;, \u0026#34;githubWorkflowRef\u0026#34;: \u0026#34;refs/heads/1.11.x\u0026#34;, \u0026#34;githubWorkflowRepository\u0026#34;: \u0026#34;microcks/microcks\u0026#34;, \u0026#34;githubWorkflowSha\u0026#34;: \u0026#34;edbe55f846f554d500ac3dc33c8346195e70f2ac\u0026#34;, \u0026#34;githubWorkflowTrigger\u0026#34;: \u0026#34;push\u0026#34; } } ] You can then extract the logIndex and connect to Rekor to get some details on it.
Here: https://search.sigstore.dev/?logIndex=152950063\nProvenance All our images are built with a SLSA Provenance attestation (currently in v0.2). This attestation is attached as a layer of a metadata manifest of the main image index.\nYou can quickly inspect the Provenance attestation value using the imagetools inspect tool from docker like this:\ndocker buildx imagetools inspect quay.io/microcks/microcks:nightly --format \u0026#34;{{ json .Provenance }}\u0026#34; If you need access to the raw in-toto predicates, you can use a tool like the ORAS utility.\nAs Microcks images are provided for linux/amd64 and linux/arm64 architectures, the first 2 manifests of an image index are reserved for these architectures. Then, starting at index 2, come the metadata manifests from which you can extract in-toto attestations. For example: you can extract the Provenance of the microcks:nightly image using these commands:\nPROVENANCE_DIGEST=`docker manifest inspect --verbose quay.io/microcks/microcks:nightly | jq -r \u0026#39;.[2].OCIManifest.layers | map(select(.annotations.\u0026#34;in-toto.io/predicate-type\u0026#34; == \u0026#34;https://slsa.dev/provenance/v0.2\u0026#34;) | .digest)[0]\u0026#39;` oras blob fetch --output - quay.io/microcks/microcks:nightly@$PROVENANCE_DIGEST | jq . which may produce the following output:\n{ \u0026#34;_type\u0026#34;: \u0026#34;https://in-toto.io/Statement/v0.1\u0026#34;, \u0026#34;predicateType\u0026#34;: \u0026#34;https://slsa.dev/provenance/v0.2\u0026#34;, \u0026#34;subject\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;pkg:docker/quay.io/microcks/microcks@nightly?platform=linux%2Famd64\u0026#34;, \u0026#34;digest\u0026#34;: { \u0026#34;sha256\u0026#34;: \u0026#34;109c1a70123a64c824b32dfebaf0934b6d40db127af409ed07ae303966f1b412\u0026#34; } }, { \u0026#34;name\u0026#34;: \u0026#34;pkg:docker/microcks/microcks@nightly?platform=linux%2Famd64\u0026#34;, \u0026#34;digest\u0026#34;: { \u0026#34;sha256\u0026#34;: \u0026#34;109c1a70123a64c824b32dfebaf0934b6d40db127af409ed07ae303966f1b412\u0026#34; } } ], \u0026#34;predicate\u0026#34;: { \u0026#34;builder\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;buildType\u0026#34;: \u0026#34;https://mobyproject.org/buildkit@v1\u0026#34;, \u0026#34;materials\u0026#34;: [ { \u0026#34;uri\u0026#34;: \u0026#34;pkg:docker/registry.access.redhat.com/ubi9/[email protected]?platform=linux%2Famd64\u0026#34;, \u0026#34;digest\u0026#34;: { \u0026#34;sha256\u0026#34;: \u0026#34;1b6d711648229a1c987f39cfdfccaebe2bd92d0b5d8caa5dbaa5234a9278a0b2\u0026#34; } } ], \u0026#34;invocation\u0026#34;: { \u0026#34;configSource\u0026#34;: { \u0026#34;entryPoint\u0026#34;: \u0026#34;Dockerfile\u0026#34; }, \u0026#34;parameters\u0026#34;: { \u0026#34;frontend\u0026#34;: \u0026#34;dockerfile.v0\u0026#34;, \u0026#34;locals\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;context\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;dockerfile\u0026#34; } ] }, \u0026#34;environment\u0026#34;: { \u0026#34;platform\u0026#34;: \u0026#34;linux/amd64\u0026#34; } }, \u0026#34;metadata\u0026#34;: { \u0026#34;buildInvocationID\u0026#34;: \u0026#34;hpnt5a6sztl5mxvhlp1n8hd1v\u0026#34;, \u0026#34;buildStartedOn\u0026#34;: \u0026#34;2024-12-03T13:14:59.450237594Z\u0026#34;, \u0026#34;buildFinishedOn\u0026#34;: \u0026#34;2024-12-03T13:15:52.403132277Z\u0026#34;, \u0026#34;completeness\u0026#34;: { \u0026#34;parameters\u0026#34;: false, \u0026#34;environment\u0026#34;: true, \u0026#34;materials\u0026#34;: false }, \u0026#34;reproducible\u0026#34;:
false, \u0026#34;https://mobyproject.org/buildkit@v1#metadata\u0026#34;: { \u0026#34;vcs\u0026#34;: { \u0026#34;localdir:context\u0026#34;: \u0026#34;webapp\u0026#34;, \u0026#34;localdir:dockerfile\u0026#34;: \u0026#34;webapp/src/main/docker\u0026#34;, \u0026#34;revision\u0026#34;: \u0026#34;f3cfa7c2c741e6023bd2bef77a5b87278f01d540\u0026#34;, \u0026#34;source\u0026#34;: \u0026#34;https://github.com/microcks/microcks\u0026#34; } } } } } You can find in the attestation the GitHub source and revision, the base image used (from registry.access.redhat.com/ubi9) as well as the build metadata.\nSBOM - Software Bill Of Materials All our images are built with a SPDX SBOM attestation (currently in v2.3). This attestation is attached as a layer of a metadata manifest of the main image index.\nYou can quickly inspect the SBOM attestation value using the imagetools inspect tool from docker like this:\ndocker buildx imagetools inspect quay.io/microcks/microcks-postman-runtime:nightly --format \u0026#34;{{ json .SBOM }}\u0026#34; If you need access to the raw in-toto predicates, you can use a tool like the ORAS utility.\nAs Microcks images are provided for linux/amd64 and linux/arm64 architectures, the first 2 manifests of an image index are reserved for these architectures. Then, starting at index 2, come the metadata manifests from which you can extract in-toto attestations. For example: you can extract the SBOM of the microcks-postman-runtime:nightly image using these commands:\nSBOM_DIGEST=`docker manifest inspect --verbose quay.io/microcks/microcks-postman-runtime:nightly | jq -r \u0026#39;.[2].OCIManifest.layers | map(select(.annotations.\u0026#34;in-toto.io/predicate-type\u0026#34; == \u0026#34;https://spdx.dev/Document\u0026#34;) | .digest)[0]\u0026#39;` oras blob fetch --output - quay.io/microcks/microcks-postman-runtime:nightly@$SBOM_DIGEST | jq . which may produce the following output:
{ \u0026#34;_type\u0026#34;: \u0026#34;https://in-toto.io/Statement/v0.1\u0026#34;, \u0026#34;predicateType\u0026#34;: \u0026#34;https://spdx.dev/Document\u0026#34;, \u0026#34;subject\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;pkg:docker/quay.io/microcks/microcks-postman-runtime@nightly?platform=linux%2Famd64\u0026#34;, \u0026#34;digest\u0026#34;: { \u0026#34;sha256\u0026#34;: \u0026#34;11c951599ed1bf649abbc2b23ae2730a4e1ef6ad9537a7f10df39b6546bf8429\u0026#34; } } ], \u0026#34;predicate\u0026#34;: { \u0026#34;spdxVersion\u0026#34;: \u0026#34;SPDX-2.3\u0026#34;, \u0026#34;dataLicense\u0026#34;: \u0026#34;CC0-1.0\u0026#34;, \u0026#34;SPDXID\u0026#34;: \u0026#34;SPDXRef-DOCUMENT\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;sbom\u0026#34;, \u0026#34;documentNamespace\u0026#34;: \u0026#34;https://anchore.com/syft/dir/sbom-250326c0-1ac9-45df-b956-7034af7e03f0\u0026#34;, \u0026#34;creationInfo\u0026#34;: { \u0026#34;licenseListVersion\u0026#34;: \u0026#34;3.23\u0026#34;, \u0026#34;creators\u0026#34;: [ \u0026#34;Organization: Anchore, Inc\u0026#34;, \u0026#34;Tool: syft-v0.105.0\u0026#34;, \u0026#34;Tool: buildkit-v0.17.2\u0026#34; ], \u0026#34;created\u0026#34;: \u0026#34;2024-12-03T13:19:29Z\u0026#34; }, \u0026#34;packages\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;@colors/colors\u0026#34;, \u0026#34;SPDXID\u0026#34;: \u0026#34;SPDXRef-Package-npm--colors-colors-0d3fee5f6cc0bed6\u0026#34;, \u0026#34;versionInfo\u0026#34;: \u0026#34;1.6.0\u0026#34;, \u0026#34;supplier\u0026#34;: \u0026#34;Person: DABH\u0026#34;, \u0026#34;originator\u0026#34;: \u0026#34;Person: DABH\u0026#34;, \u0026#34;downloadLocation\u0026#34;: \u0026#34;http://github.com/DABH/colors.js.git\u0026#34;, \u0026#34;filesAnalyzed\u0026#34;: false, \u0026#34;homepage\u0026#34;: \u0026#34;https://github.com/DABH/colors.js\u0026#34;, \u0026#34;sourceInfo\u0026#34;: \u0026#34;acquired package info from installed node module manifest file: /app/node_modules/@colors/colors/package.json\u0026#34;, \u0026#34;licenseConcluded\u0026#34;: \u0026#34;NOASSERTION\u0026#34;, \u0026#34;licenseDeclared\u0026#34;: \u0026#34;MIT\u0026#34;, \u0026#34;copyrightText\u0026#34;: \u0026#34;NOASSERTION\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;get colors in your node.js console\u0026#34;, \u0026#34;externalRefs\u0026#34;: [ // [...] ] }, // [...] ] } } You can find in the attestation all the packages directly or transitively included in the container image.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/docker-desktop-extension/","title":"As a Docker Desktop Extension","description":"","searchKeyword":"","content":"This guide shows you how to install Microcks as a Docker Desktop Extension on your local machine. This way of installing Microcks is very convenient for people wanting to start quickly with the most common Microcks capabilities without hitting the terminal 👻\nDocker Desktop is a simple-to-install application for Mac, Windows, or Linux that allows you to create and share containerized applications and microservices. Docker Desktop includes the Docker Engine, the Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and the Credential Helper.\nInstallation To get started, make sure you have Docker Desktop installed on your system.
Then, once it is running:\nSelect Add Extensions and type microcks in the search box,\nChoose Microcks, install it, launch it and you are ready to go 🤩\nThe video just below illustrates the installation process as well as the creation of a first Direct API. It has never been simpler to set up and use Microcks on a laptop. 🙌\nSettings The settings panel allows you to configure some options like whether you\u0026rsquo;d like to enable the Asynchronous APIs features (default is disabled) and whether you need to set an offset on the ports that are used to access the services.\nPretty straightforward!\nWrap-up You just installed Microcks in a graphical way on your local machine. Congrats! 🎉\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/automation/api/","title":"Connecting to Microcks API","description":"","searchKeyword":"","content":"Overview This guide shows you how to authenticate to and how to use the Microcks API for better automation of tasks. As all the features available in Microcks can be used directly through its REST API, you can extend it the way you want and use it in a pure headless mode.\nThis guide takes place in 3 steps:\n1️⃣ We will check your security configuration and see if authentication is required (it depends on how you deployed Microcks),\n2️⃣ If your Microcks is secured, we will authenticate and retrieve a token to later use for authorizing API calls,\n3️⃣ We will issue a bunch of API calls and discuss permissions.\n💡 All the commands of this guide are exposed as curl commands; it\u0026rsquo;s then up to you to translate them into your language or automation stack of choice. As this is a simple test, we will not bother with certificate validation and add the -k flag to the commands. Be sure to use the --cacert or --capath options on real environments with custom certificates.\nLet\u0026rsquo;s jump in! 🏂\n1. Check security configuration Assuming you\u0026rsquo;re running your Microcks instance at https://microcks.example.com and that you\u0026rsquo;re not aware of your security configuration, you may execute this first command in your terminal to get the configuration:\ncurl https://microcks.example.com/api/keycloak/config -k { \u0026#34;enabled\u0026#34;: true, \u0026#34;realm\u0026#34;: \u0026#34;microcks\u0026#34;, \u0026#34;resource\u0026#34;: \u0026#34;microcks-app-js\u0026#34;, \u0026#34;auth-server-url\u0026#34;: \u0026#34;https://keycloak.microcks.example.com\u0026#34;, \u0026#34;ssl-required\u0026#34;: \u0026#34;external\u0026#34;, \u0026#34;public-client\u0026#34;: true } On the above command output, you see that Keycloak and thus authentication are actually enabled. We will use the auth-server-url and realm for authentication. If that\u0026rsquo;s not the case, then you can skip the end of this step as well as step 2.\nBefore going further, you need to retrieve a Service Account for authenticating to Keycloak. Your Microcks provider or administrator has probably read the explanations on Service Accounts and will be able to provide this information.\nFor newcomers, don\u0026rsquo;t worry! The default installation comes with an account named microcks-serviceaccount, whose default credential is set to ab54d329-e435-41ae-a900-ec6b3fe15c54. 😉\n2. 
Authenticate to Keycloak Your Microcks installation is secured, you have your Service Account information at hand and you now need to authenticate and retrieve a token.\nService Account authentication implements the simple OAuth 2.0 Client Credentials Grant so that it\u0026rsquo;s convenient for machine-to-machine interaction scenarios. This grant requires that our service account name and credentials be first encoded in Base64:\n# encode account:credentials as base64 $ echo \u0026#34;microcks-serviceaccount:ab54d329-e435-41ae-a900-ec6b3fe15c54\u0026#34; | base64 bWljcm9ja3Mtc2VydmljZWFjY291bnQ6YWI1NGQzMjktZTQzNS00MWFlLWE5MDAtZWM2YjNmZTE1YzU0Cg= Then you can issue a POST command to the auth-server-url and realm previously retrieved, reusing this Base64 string in a basic authorization header and specifying the client credentials grant type:\n# authenticate and retrieve an access_token from Keycloak curl -X POST https://keycloak.microcks.example.com/realms/microcks/protocol/openid-connect/token \\ -H \u0026#39;Content-Type: application/x-www-form-urlencoded\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; \\ -H \u0026#39;Authorization: Basic bWljcm9ja3Mtc2VydmljZWFjY291bnQ6YWI1NGQzMjktZTQzNS00MWFlLWE5MDAtZWM2YjNmZTE1YzU0Cg=\u0026#39; \\ -d \u0026#39;grant_type=client_credentials\u0026#39; -k { \u0026#34;access_token\u0026#34;: \u0026#34;eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnTVY5OUNfdHRCcDNnSy0tUklaYkY5TDJUWkdpTWZUSWQwaXNrXzh4TElZIn0.eyJleHAiOjE3MTcwNzA0MTQsImlhdCI6MTcxNzA3MDExNCwianRpIjoiM2YyYWZkMjgtMzQ3Ny00NjJiLWIzYmEtNDljZTE3NGQwMTViIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MTgwL3JlYWxtcy9taWNyb2NrcyIsImF1ZCI6WyJtaWNyb2Nrcy1hcHAiLCJhY2NvdW50Il0sInN1YiI6IjY5OGZhMzM5LTk5NjEtNDA0ZC1iMjUwLTRhMzQ5MzY2ZDQ2ZCIsInR5cCI6IkJlYXJlciIsImF6cCI6Im1pY3JvY2tzLXNlcnZpY2VhY2NvdW50IiwiYWNyIjoiMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiZGVmYXVsdC1yb2xlcy1taWNyb2NrcyJdfSwicmVzb3VyY2VfYWNjZXNzIjp7Im1pY3JvY2tzLWFwcCI6eyJyb2xlcyI6WyJ1c2VyIl19LCJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJjbGllbnRIb3N0IjoiMTcyLjE3LjAuMSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwicHJlZmVycmVkX3VzZXJuYW1lIjoic2VydmljZS1hY2NvdW50LW1pY3JvY2tzLXNlcnZpY2VhY2NvdW50IiwiY2xpZW50QWRkcmVzcyI6IjE3Mi4xNy4wLjEiLCJjbGllbnRfaWQiOiJtaWNyb2Nrcy1zZXJ2aWNlYWNjb3VudCJ9.FgWaKrZthEEK4pAyd9n8mMxCfErCzXN8l8QUaAI9-VYfwfy1qXAqpqtL8rTtOf4MiAV0P7ntz1firmv6GfaInHD9FMbysXOtp6RVB3Jj0ITNqsR-Guw6lYZIKg5ECtqLw3x5cISaq00VGTIOpZDGVn8GRM-a6XQHvfRJzPqgZXELWIhxCzmBor2Sv8m35E_jNQT-cMNrd7XPdRfFYcYqxQgOmez1N9uHg0kajWJEHKFu1TFaa1HT2vaFB6QgNnJusiEIVEltK7FG42SC1QXH9LmUJC9FK7jRTqJx43VMLOCT4xnwsimVq6vlYr_TCsrCB7HqSZUQqeer9cddRnsfag\u0026#34;, \u0026#34;expires_in\u0026#34;: 300, \u0026#34;refresh_expires_in\u0026#34;: 0, \u0026#34;token_type\u0026#34;: \u0026#34;Bearer\u0026#34;, \u0026#34;not-before-policy\u0026#34;: 0, \u0026#34;scope\u0026#34;: \u0026#34;email profile\u0026#34; } The important thing here is the access_token property of the authentication response that you need to extract and keep at hand.\n3. Connect to Microcks API If you retrieved an access_token in the previous step, you can store it into a TOKEN environment variable like this:\nexport TOKEN=eyJhbGciOiJSUzI1NiIsIn... 
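💡 If you have jq at hand, you can fetch the token and export it in one go - a minimal sketch reusing the exact same Keycloak endpoint and Base64 credentials as above:\nexport TOKEN=$(curl -s -X POST https://keycloak.microcks.example.com/realms/microcks/protocol/openid-connect/token -H \u0026#39;Content-Type: application/x-www-form-urlencoded\u0026#39; -H \u0026#39;Authorization: Basic bWljcm9ja3Mtc2VydmljZWFjY291bnQ6YWI1NGQzMjktZTQzNS00MWFlLWE5MDAtZWM2YjNmZTE1YzU0Cg=\u0026#39; -d \u0026#39;grant_type=client_credentials\u0026#39; -k | jq -r .access_token)\n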
If you skipped step 2 because you\u0026rsquo;re using an unauthenticated instance of Microcks, then you can set TOKEN to any value you want like below.\nexport TOKEN=foobar Now that the TOKEN is set, you can issue commands to the Microcks API, providing it as the Authorization header value.\nFor example, you can check the content of your API | Services repository like this:\ncurl https://microcks.example.com/api/services/map -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; -k { \u0026#34;REST\u0026#34;: 23, \u0026#34;GENERIC_REST\u0026#34;: 1, \u0026#34;GRAPHQL\u0026#34;: 3, \u0026#34;EVENT\u0026#34;: 13, \u0026#34;SOAP_HTTP\u0026#34;: 2, \u0026#34;GRPC\u0026#34;: 3 } You can also access the list of API | Services, requesting the first item like this:\ncurl \u0026#39;https://microcks.example.com/api/services?page=0\u0026amp;size=1\u0026#39; -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; -k [ { \u0026#34;id\u0026#34;: \u0026#34;65fc52b9512f6013cb7e9781\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;API Pastry - 2.0\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;REST\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;createdOn\u0026#34;: 1711035065536, \u0026#34;lastUpdate\u0026#34;: 1714377633653, \u0026#34;labels\u0026#34;: { \u0026#34;domain\u0026#34;: \u0026#34;pastry\u0026#34; } }, \u0026#34;sourceArtifact\u0026#34;: \u0026#34;https://raw.githubusercontent.com/microcks/microcks/master/samples/APIPastry-openapi.yaml\u0026#34;, \u0026#34;operations\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;GET /pastry\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;resourcePaths\u0026#34;: [ \u0026#34;/pastry\u0026#34; ] }, { \u0026#34;name\u0026#34;: \u0026#34;GET /pastry/{name}\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARTS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;defaultDelay\u0026#34;: 0, \u0026#34;resourcePaths\u0026#34;: [ \u0026#34;/pastry/Eclair%20Cafe\u0026#34;, \u0026#34;/pastry/Millefeuille\u0026#34; ], \u0026#34;parameterConstraints\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;TraceID\u0026#34;, \u0026#34;in\u0026#34;: \u0026#34;header\u0026#34;, \u0026#34;required\u0026#34;: false, \u0026#34;recopy\u0026#34;: true } ] }, { \u0026#34;name\u0026#34;: \u0026#34;PATCH /pastry/{name}\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;PATCH\u0026#34;, \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARTS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;resourcePaths\u0026#34;: [ \u0026#34;/pastry/Eclair%20Cafe\u0026#34; ] } ] } ] And you can also get access to the details of this specific API, reusing its id with the following API call:\ncurl \u0026#39;https://microcks.example.com/api/services/65fc52b9512f6013cb7e9781?messages=true\u0026#39; -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; -k If you previously followed the Importing Services \u0026amp; APIs guide and created a scheduled importer, then you can access the list of importer jobs:\ncurl \u0026#39;https://microcks.example.com/api/jobs?page=0\u0026amp;size=1\u0026#39; -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; -k [ { \u0026#34;id\u0026#34;:\u0026#34;6470b31415d8e3652a787bad\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;API Pastries Collection\u0026#34;, 
\u0026#34;repositoryUrl\u0026#34;:\u0026#34;https://raw.githubusercontent.com/microcks/api-lifecycle/master/contract-testing-demo/apipastries-postman-collection.json\u0026#34;, \u0026#34;mainArtifact\u0026#34;:false, \u0026#34;repositoryDisableSSLValidation\u0026#34;:false, \u0026#34;createdDate\u0026#34;:1685107476336, \u0026#34;lastImportDate\u0026#34;:1695721275198, \u0026#34;active\u0026#34;:false, \u0026#34;etag\u0026#34;:\u0026#34;\\\u0026#34;28fddf9e35d01cb283c334440a461e4054c6f27f993962c6b27759d5db3a11ee\\\u0026#34;\u0026#34;, \u0026#34;metadata\u0026#34;:{ \u0026#34;createdOn\u0026#34;:1685107476336, \u0026#34;lastUpdate\u0026#34;:1695721281109, \u0026#34;labels\u0026#34;:{} }, \u0026#34;serviceRefs\u0026#34;:[ {\u0026#34;serviceId\u0026#34;:\u0026#34;65031293f2de8546d2ddc07e\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;API Pastries\u0026#34;,\u0026#34;version\u0026#34;:\u0026#34;0.0.1\u0026#34;} ] } ] However, some of the API calls are restricted to certain permissions. For example, if you try to activate the above importer job using the following API call:\ncurl \u0026#39;https://microcks.example.com/api/jobs/6470b31415d8e3652a787bad/start\u0026#39; -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; -k -v You\u0026rsquo;ll get the following error response:\n\u0026lt; HTTP/1.1 403 \u0026lt; WWW-Authenticate: Bearer error=\u0026#34;insufficient_scope\u0026#34;, error_description=\u0026#34;The request requires higher privileges than provided by the access token.\u0026#34;, error_uri=\u0026#34;https://tools.ietf.org/html/rfc6750#section-3.1\u0026#34; This is expected as Service Accounts endorse roles. By default, the microcks-serviceaccount only endorses the user role and cannot perform advanced operations like creating or activating importer jobs.\nWrap-up Walking this guide, you have learned how to connect to the Microcks API, going through authentication first if your installation has enabled it. Microcks proposes authenticating via a Service Account, using the OAuth 2.0 Client Credentials Grant to retrieve a valid token. This authentication mechanism is the foundation that is used by all other means to interact with Microcks\u0026rsquo; API: the CLI, the GitHub Actions, the Jenkins plugin, etc.\nYou may follow up this guide by consulting the reference about Microcks\u0026rsquo; REST API or learning more about Service Accounts.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/integration/ai-copilot/","title":"Enabling the AI Copilot","description":"","searchKeyword":"","content":" 🪄 To Be Created\nThis is a new documentation page that has to be written as part of our Refactoring Effort.\nGoal of this page\n\u0026hellip; "},{"section":"Documentation","url":"https://microcks.io/documentation/references/configuration/application-config/","title":"Application Configuration","description":"","searchKeyword":"","content":"Overview This page aims to give you a comprehensive reference on the configuration properties used within Microcks. This information is the ideal companion of the Architecture \u0026amp; Deployment Options explanations and will be crucial for people who:\nWant to install Microcks - providing info on what can be configured and what the default values are, Want to customize configurations - providing info on what can be used to provide customized Docker-Compose files, Want to develop or extend Microcks - providing comprehensive info on what\u0026rsquo;s externalized as properties and guidelines on how to extend. 
Before starting, it\u0026rsquo;s important to understand how configuration files are actually organized and served to different components. As Microcks is delivered via container images, the configuration is externalized into .properties files that should be mounted into containers on the /deployments/config mounting path.\nThe way these configuration properties are supplied differs depending on how you use Microcks:\nWhen run via Docker Compose, Podman Compose or via the Docker Desktop Extension, the properties files are directly managed as files on the local filesystem When run on Kubernetes and installed via Helm Chart or Operator, the properties files are supplied to the components using ConfigMap resources. When run through our Testcontainers module, you just set up environment variables that will be used as values when loading the configuration properties. It\u0026rsquo;s important to note that depending on the method you use for installation, the configuration properties may have different names. However, we\u0026rsquo;re just following installation method idioms and conventions so that matching should be straightforward. For example, a configuration property named features.feature.repository-filter.label-key=value in a raw properties file will be matched with the following YAML equivalent when configuring via Helm values.yaml or Operator Resource:\nfeatures: feature: repositoryFilter: labelKey: value 🚨 In this page, we use the raw properties notation that can be used easily on your local machine when testing via Docker Compose. Be sure to check the Helm Chart or Operator reference documentations to get the equivalent.\nWebapp component config This section details the configuration properties used by the main Webapp component of Microcks.\napplication.properties application.properties is the main configuration file where core features are configured.\nNetwork \u0026amp; management The Webapp component restricts the size of uploaded files to 2MB by default. It also configures a bunch of management features and endpoints at startup:\n# Application configuration properties spring.servlet.multipart.max-file-size=${MAX_UPLOAD_FILE_SIZE:2MB} spring.jackson.serialization.write-dates-as-timestamps=true spring.jackson.default-property-inclusion=non_null server.forward-headers-strategy=NATIVE management.endpoints.enabled-by-default=false management.endpoints.jmx.exposure.exclude=* management.endpoints.web.exposure.include=* management.endpoint.metrics.enabled=true management.endpoint.prometheus.enabled=true management.metrics.export.prometheus.enabled=true management.metrics.distribution.percentiles-histogram.http.server.requests=true management.metrics.distribution.slo.http.server.requests=1ms, 5ms, 10ms, 25ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 2500ms, 5000ms, 10000ms Components connection The Webapp component should know how to connect to external components and the callbacks those components should use:\ntests-callback.url=${TEST_CALLBACK_URL:http://localhost:8080} postman-runner.url=${POSTMAN_RUNNER_URL:http://localhost:3000} async-minion.url=${ASYNC_MINION_URL:http://localhost:8081} default-artifacts-repository.url=${DEFAULT_ARTIFACTS_REPOSITORY_URL:#{null}} validation.resourceUrl=http://localhost:8080/api/resources/ Scheduled imports The interval at which Import Jobs are scheduled can be configured using a CRON expression. 
Default is every 2 hours:\nservices.update.interval=${SERVICES_UPDATE_INTERVAL:0 0 0/2 * * *} Async API support The Webapp component can be configured to support AsyncAPI and use Kafka to publish change events.\n# Async mocking support. async-api.enabled=false async-api.default-binding=KAFKA async-api.default-frequency=3 # Kafka configuration properties spring.kafka.producer.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVER:localhost:9092} spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer spring.kafka.producer.value-serializer=io.github.microcks.event.ServiceViewChangeEventSerializer Conformance metrics computing Those properties define how the conformance result trend is computed:\n# Test conformance computation config test-conformance.trend-size=3 test-conformance.trend-history-size=10 AI Copilot The Webapp component may use a generative AI LLM for its AI Copilot features:\n# AI Copilot configuration properties ai-copilot.enabled=false ai-copilot.implementation=openai ai-copilot.openai.api-key=sk-my-openai-api-key #ai-copilot.openai.api-url=http://localhost:1234/ ai-copilot.openai.timeout=30 ai-copilot.openai.maxTokens=3000 #ai-copilot.openai.model=gpt-4-turbo-preview Security settings All the security related settings (network, identity provider connections, CORS support, etc\u0026hellip;) can be found in the Security Configuration reference.\nfeatures.properties The features.properties file holds configuration properties that should be distributed to the UI or external components discovering Microcks capabilities.\nHub Access Integration of Microcks Hub as a marketplace to retrieve API \u0026amp; Services mocks is enabled by default:\nfeatures.feature.microcks-hub.enabled=true features.feature.microcks-hub.endpoint=https://hub.microcks.io/api features.feature.microcks-hub.allowed-roles=admin,manager,manager-any 🗒️ The manager-any is not actually a role, it\u0026rsquo;s a notation meaning \u0026ldquo;A user that belongs to any management group even if not endorsing the global manager role\u0026rdquo;.\nAsync API support Support for AsyncAPI is an optional feature that is disabled by default. Endpoint information may be provided for each supported binding:\nfeatures.feature.async-api.enabled=false features.feature.async-api.frequencies=3,10,30 features.feature.async-api.default-binding=KAFKA features.feature.async-api.endpoint-KAFKA=my-cluster-kafka-bootstrap.apps.try.microcks.io features.feature.async-api.endpoint-MQTT=my-mqtt-broker.apps.try.microcks.io #features.feature.async-api.endpoint-\u0026lt;BINDING\u0026gt;=endpoint-information Repository filtering Repository filtering allows using labels for segregating the API \u0026amp; Service repository. See this section in the Organizing Repository guide. It is disabled by default with those values:\nfeatures.feature.repository-filter.enabled=false features.feature.repository-filter.label-key=domain features.feature.repository-filter.label-label=Domain features.feature.repository-filter.label-list=domain,status Repository segmentation Repository tenancy allows using labels for segmenting the API \u0026amp; Service management permissions. See this section in the Organizing Repository guide. 
It is disabled by default with those values:\nfeatures.feature.repository-tenancy.enabled=false features.feature.repository-tenancy.artifact-import-allowed-roles=admin,manager,manager-any 🗒️ The manager-any is not actually a role, it\u0026rsquo;s a notation meaning \u0026ldquo;A user that belongs to any management group even if not endorsing the global manager role\u0026rdquo;.\nAsync Minion component config This section details the configuration properties used by the optional Async Minion component of Microcks.\napplication.properties application.properties is the only configuration file used.\n💡 When launched using Docker Compose, the Async Minion is run with a profile named docker-compose. Each property below should then be prefixed with %docker-compose.\nSo, for example, if you want to change the HTTP port to 8082, you\u0026rsquo;ll actually need to set up %docker-compose.quarkus.http.port=8082.\nBehavior The Async Minion behavior can be configured in terms of supported protocols (minion.supported-bindings), restricted message-producing frequencies (minion.restricted-frequencies is a comma-separated list of delays in seconds between 2 publications) and default Avro encoding (see Kafka, Avro \u0026amp; Schema Registry):\n# Configure the minion\u0026#39;s own behavioral properties. minion.supported-bindings=KAFKA,WS minion.restricted-frequencies=3,10,30 minion.default-avro-encoding=RAW Network \u0026amp; management The Async Minion uses the non-standard port 8081 for listening. The Kafka health probe is enabled by default:\n# Configuration file. quarkus.http.port=8081 # Configure the log level. quarkus.log.level=INFO quarkus.log.console.level=INFO # Configure kafka integration into health probe. quarkus.kafka.health.enabled=true Components connection The Async Minion component should know how to connect to Microcks. The Keycloak/IDP connection is discovered dynamically from Microcks or can be overridden at the local level (commented by default):\n# Access to Microcks API server. io.github.microcks.minion.async.client.MicrocksAPIConnector/mp-rest/url=http://localhost:8080 microcks.serviceaccount=microcks-serviceaccount microcks.serviceaccount.credentials=ab54d329-e435-41ae-a900-ec6b3fe15c54 # Access to Keycloak URL if you override the one coming from Microcks config #keycloak.auth.url=http://localhost:8180 Kafka connection The Async Minion, in the standard distribution, connects to a Kafka broker to receive the service change events. If connecting to a Schema Registry (see this guide), the Confluent compatibility mode is the one selected by default:\n# Access to Kafka broker. kafka.bootstrap.servers=localhost:9092 # For Apicurio registry #kafka.schema.registry.url=http://localhost:8888 #kafka.schema.registry.confluent=false # For Confluent registry #kafka.schema.registry.url=http://localhost:8889 kafka.schema.registry.confluent=true kafka.schema.registry.username= kafka.schema.registry.credentials.source=USER_INFO mp.messaging.incoming.microcks-services-updates.connector=smallrye-kafka mp.messaging.incoming.microcks-services-updates.topic=microcks-services-updates mp.messaging.incoming.microcks-services-updates.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer mp.messaging.incoming.microcks-services-updates.value.deserializer=io.github.microcks.minion.async.client.ServiceViewChangeEventDeserializer # Do not save any consumer-offset on the broker as there\u0026#39;s a re-sync on each minion startup. 
mp.messaging.incoming.microcks-services-updates.enable.auto.commit=false mp.messaging.incoming.microcks-services-updates.bootstrap.servers=localhost:9092 Optional brokers You can connect the Async Minion to additional event message brokers using the properties section below. By design, the actual connection is done only at message transmission time and not at startup.\n# Access to NATS broker. nats.server=localhost:4222 nats.username=microcks nats.password=microcks # Access to MQTT broker. mqtt.server=localhost:1883 mqtt.username=microcks mqtt.password=microcks # Access to RabbitMQ broker. amqp.server=localhost:5672 amqp.username=microcks amqp.password=microcks # Access to Google PubSub. googlepubsub.project=my-project googlepubsub.service-account-location=/deployments/config/googlecloud-service-account.json # Access to Amazon SQS amazonsqs.region=eu-west-3 amazonsqs.credentials-type=env-variable #amazonsqs.credentials-type=profile amazonsqs.credentials-profile-name=microcks-sqs-admin amazonsqs.credentials-profile-location=/deployments/config/amazon-sqs/aws.profile #amazonsqs.endpoint-override=http://localhost:4566 # Access to Amazon SNS amazonsns.region=eu-west-3 amazonsns.credentials-type=env-variable #amazonsns.credentials-type=profile amazonsns.credentials-profile-name=microcks-sns-admin amazonsns.credentials-profile-location=/deployments/config/amazon-sns/aws.profile #amazonsns.endpoint-override=http://localhost:4566 "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/","title":"Installation","description":"Here below all the guides related to **Installation**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/","title":"Tutorials","description":"Tutorials","searchKeyword":"","content":"Tutorials: Learn how Microcks works Welcome to Microcks Tutorials! Our Tutorials section teaches you the basics of Microcks by doing.\n💡 Remember Contribute to Microcks Tutorials\nCode isn\u0026rsquo;t the only way to contribute to OSS; Dev Docs are a huge help that benefits the entire OSS ecosystem. At Microcks, we value Doc contributions as much as every other type of contribution. ❤️\nTo get started as a Docs contributor:\nFamiliarize yourself with our project\u0026rsquo;s Contribution Guide and our Code of Conduct Head over to our Microcks Docs Board Pick an issue you would like to contribute to and leave a comment introducing yourself. This is also the perfect place to leave any questions you may have on how to get started If there is no work done in that Docs issue yet, feel free to open a PR and get started! Docs contributor questions\nDo you have a documentation contributor question and you\u0026rsquo;re wondering how to tag us into a GitHub discussion or PR? Have no fear!\nJoin us on Discord and use the #documentation channel to ping us!\n"},{"section":"Documentation","url":"https://microcks.io/documentation/overview/what-is-microcks/","title":"What is Microcks?","description":"","searchKeyword":"","content":"Microcks is a tool for mocking and testing your APIs and microservices. It leverages API standards to provide a uniform and multi-protocol approach for simulating complex distributed environments and validating service components in isolation.\nMicrocks facilitates rapid simulation generation, automated API testing, and seamless CI/CD integration, streamlining development and deployment processes. 
Microcks empowers teams to optimize services and accelerate product releases, gaining a competitive edge.\nMicrocks is a Cloud Native Computing Foundation (CNCF) Sandbox project and a 100% Open Source and community-driven initiative.\nWho is Microcks for? Depending on your profile, you may use Microcks to gain different advantages:\nFor API Owners Get instant feedback on design iterations with Microcks-powered simulations Leverage Open Standards and ensure re-use for easy communication Share simulations and conformance certification test kits with your teams and partners Assess and monitor conformance quality risks of your API and services portfolio For Developers Use Microcks on your laptop to simulate API and services dependencies Leverage your API specifications, GraphQL or gRPC schemas and collections to get free mocks Write Integration Tests the easy way with our Testcontainers module Get free, no-code contract-testing for all your API and services versions For Quality Assurance Compose your tests and simulation datasets the way you want via powerful multi-artifacts support Auto-discover new API or services versions and update datasets via Git integration Trigger conformance tests of the API in CI/CD at each and every commit Automate everything via Microcks’ powerful API For Platform Engineers Provide testing environments as-a-service, at very low cost Guarantee flexibility and scalability thanks to Kubernetes-native deployments Integrate Microcks into your Internal Developer Portal powered by Backstage or Kratix Deploy Sandboxes for internals, partners, customers the easy way via GitOps "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/administration/organizing-repository/","title":"Organizing Repository","description":"","searchKeyword":"","content":"Overview This guide walks through the different techniques for organizing your API \u0026amp; Services repository content in Microcks. As you import more and more artifacts into Microcks, it can become difficult to find the API you\u0026rsquo;re looking for! Microcks proposes handling this by putting labels 🏷️ on the APIs \u0026amp; Services or Importer Jobs of your repository. Labels are a very flexible way to map your own organizational structures with loose coupling.\nThis guide will show 3 techniques that use labels to enhance the organization of your repository. These techniques are progressive: you may apply the first one without pursuing the others. However, applying the third one requires having adopted the previous ones.\n1️⃣ We will apply labels to different objects in order to add categorization information,\n2️⃣ From there, we can then define a master filter for our repository, choosing a discrimination criterion,\n3️⃣ From there, we can also segment the management permissions among different users.\n🚨 Prerequisites\nLabel setup and management require changing the instance configuration and accessing it with the manager or admin role. Be sure you have such access or ask your admin for some help.\nLet’s jump in! 🏂\n1. Applying labels Generally speaking, labels 🏷️ are key/value pairs that are attached to objects, such as APIs \u0026amp; Services or Importer Jobs. 
Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to your business or organization, but do not directly imply semantics to the core system.\nSome example labels that may suit your classification needs:\ndomain may represent the business or application domain this API belongs to. Example values: customer, finance, sales, shipping \u0026hellip; status may represent the status of the API in the lifecycle. Example values: wip, preview, GA, deprecated, retired \u0026hellip; type or pattern may represent the pattern involved in the API implementation. Example values: proxy, composition, assembly \u0026hellip; team may represent the owner team for this API. Example values: team-A, team-B \u0026hellip; Labels can be attached at creation time and subsequently added and modified at any time. Each API \u0026amp; Service can have a set of key/value labels defined and each key must be unique for a given object.\nWhen accessing API details, labels are displayed with global metadata at the top level with the Manage labels\u0026hellip; link:\nLabel management is done through the dialog shown below, where one can easily add new labels or remove existing ones.\n2. Filtering repository content Labels can also be used to select subsets of APIs \u0026amp; Services.\nMicrocks does not impose any labels or way of modeling them 😉 However, for now it applies one-level filtering in its UI using one master label you define as the most important. Below is an example of what you get on the UI side when defining Domain as the main label:\nA Microcks administrator can configure one label as being the main one, the master that will be used for first-level filtering in the Services list page of the Microcks web UI.\nFor that, we rely on the features.properties configuration file found on the server side. Below is the portion of the features.properties configuration used for enabling repository-filter and having the results shown in the capture just above. You\u0026rsquo;ll see that we use domain as the main label and that we only display the domain and status labels on the Services list page:\nfeatures.feature.repository-filter.enabled=true features.feature.repository-filter.label-key=domain features.feature.repository-filter.label-label=Domain features.feature.repository-filter.label-list=domain,status 💡 You may check the Application Configuration reference documentation to get a comprehensive list and explanations of the above properties.\n3. Segmenting management responsibilities The final technique of repository organization is to distribute/segment the management permissions among different users.\nAs an example, if you defined the domain label as the master with customer, finance and sales values, you\u0026rsquo;ll be able to define users with the manager role only for the APIs \u0026amp; Services that have been labeled accordingly. 
Sarah may be defined as a manager for domain=customer and domain=finance services, while John may be defined as the manager for domain=sales APIs \u0026amp; services.\nFor that, we rely on the features.properties configuration file found on the server side. Below is the portion of the features.properties configuration used for enabling repository-tenancy:\nfeatures.feature.repository-tenancy.enabled=true features.feature.repository-tenancy.artifact-import-allowed-roles=admin,manager,manager-any As an administrator of the Microcks instance, you can now assign users to different groups using the Users Management capabilities within the Microcks Web UI.\n💡 You may check the Security Configuration reference documentation to get a comprehensive list and explanations of the above properties.\nWrap-up Walking this guide, you have learned the different means available for organizing your API \u0026amp; Services repository thanks to labels 🏷️. It\u0026rsquo;s important to note that labels are saved into the Microcks database and not replaced by a new import of your Service or API definition. They can be independently set and updated using the Microcks APIs, Microcks Metadata, OpenAPI extensions or AsyncAPI extensions.\nYou may follow up this guide with the one related to Managing Users or Snapshotting/restoring your Repository\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/importing-content/","title":"Importing Services & APIs","description":"","searchKeyword":"","content":"Overview This guide will show and discuss the different options for importing Services and APIs into Microcks. There are basically two different ways of putting new content into Microcks:\nPushing content to Microcks via Upload, Having Microcks pulling content via Importer. We will see the different ways of doing things as well as the pros and cons of the different methods.\n1. Import content via Upload Via the UI The simplest way of adding new Services or APIs mocks to your Microcks instance is by directly uploading the artifact. From the left vertical navigation bar, just select the Importers menu entry and then choose Upload. You\u0026rsquo;ll then see a dialog window allowing you to browse your filesystem and pick a new file to upload.\n💡 You can also specify whether this artifact should be considered as primary or secondary per the Multi Artifacts support. In the case of a secondary artifact, you may check the Just merge examples into existing API | Service definition box.\nHit the Upload green button. An upload followed by an artifact import should occur with notification messages appearing on the top right corner. Newly discovered Services and APIs can be found in the APIs | Services repository.\nWhile this method is very convenient for a quick test, you\u0026rsquo;ll have to re-import your artifact file on every new change\u0026hellip;\nVia the API The same thing can be done via Microcks\u0026rsquo; own API. Be sure to start reading the Connecting to Microcks API guide first, and to retrieve a token by running the authentication flow. The Service Account you use for this operation is required to have the manager role - which is not the case for the default one, as explained in Inspecting default Service Account.\nOnce you have the $TOKEN issued for the correct account, uploading a new Artifact is just a matter of executing this curl command:\n# Uploading a local artifact. 
curl \u0026#39;https://microcks.example.com/api/artifacts/upload?mainArtifact=true\u0026#39; -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; -F \u0026#39;file=@samples/films.graphql\u0026#39; -k Configure dependency resolution Direct upload is straightforward and quick to perform but comes with one caveat: it does not allow you to automatically resolve dependencies. For example, if your artifact file uses external references with relative paths, Microcks is not able to resolve these external references by default.\nAs a workaround to this limitation, and since Microcks 1.10.1, we introduced a new default-artifacts-repository.url property that takes the value of the DEFAULT_ARTIFACTS_REPOSITORY_URL environment variable when defined. It can be set to either an HTTP endpoint (starting with http[s]://) or a file endpoint (starting with file://). This default repository for artifacts will be used as the default location for Microcks to resolve relative dependencies.\ndefault-artifacts-repository.url=${DEFAULT_ARTIFACTS_REPOSITORY_URL:#{null}} 💡 For local development purposes, it is super convenient to use a very small HTTP server running on your laptop or a common folder mounted into the Microcks container as this default artifacts repository.\n2. Import content via Importer Another way of adding new Services or APIs mocks is by scheduling an Importer Job into Microcks. We think it\u0026rsquo;s actually the best way to achieve continuous, iterative and incremental discovery of your Services and APIs. The principle is very simple: you save your artifact file into the Git repository of your choice (public or private) and Microcks will take care of periodically checking if changes have been applied and new mock or service definitions are present in your artifact. The nice thing about using an Importer is that external files referenced in the target artifact will be automatically resolved for you.\n💡 Though we think that Git repositories (or other version control systems) are the best place to keep such artifacts, Microcks only requires a simple HTTP service. So you may store your artifact on a simple filesystem as long as it is reachable using HTTP.\nStill from the left vertical navigation bar, just select the Importers menu entry to see the list of existing importers.\nCreating a new scheduled import You may declare a new Importer job by hitting the Create button.\nA wizard modal then appears as creating an Importer is a 3-step process. The first step is about mandatory basic properties such as the name of your Importer and the repository URL it will use to check for discovering API mocks.\n💡 You can also specify whether this artifact should be considered as primary or secondary per the Multi Artifacts support. In the case of a secondary artifact, you may check the Just merge examples into existing API | Service definition box.\nThe second step is about authentication options for accessing the repository. Depending on the type of repository (public or private) you may need to enable/disable certificate validation as well as manage an authentication process through the usage of a Secret. Check the guide on External Secrets for more info.\nFinally, the review step displays a summary before creating the Importer Job.\nManaging scheduled importers At creation time, the importer job is automatically Scanned and Imported.\nOnce created, importer jobs can be managed, activated or forced through this screen. 
You\u0026rsquo;ll see colored markers for each job line:\nScanned means that the job is actually scheduled for the next import run. Otherwise Inactive will be displayed. Imported means that the job has been successfully imported on the previous run. Otherwise Last import errors will be displayed with a popup showing the last error, Services is a shortcut to access the services definitions discovered by this job. Using the 3-dot menu, you can easily enable/disable or force the job.\nConfigure scheduling interval The scheduling interval can be globally configured for all the Jobs. It is a global setting and not a per-Job one. This is achieved through the services.update.interval property in the application.properties configuration file that takes the value of the SERVICES_UPDATE_INTERVAL environment variable. The value should be set to a valid CRON expression; default is every 2 hours.\nservices.update.interval=${SERVICES_UPDATE_INTERVAL:0 0 0/2 * * *} Wrap-up Importing new content into Microcks can be done in several ways: UI, CLI or API.\nWhile pushing local content is very convenient for immediate definition and local development updates, setting up an importer job is the best way to achieve continuous, iterative and incremental discovery of your Services and APIs.\nMaking Microcks pull your artifacts also allows advanced resolution of dependencies, which can be mandatory when your OpenAPI or AsyncAPI artifacts are using $ref.\nAs an import can be scheduled and can take a little time, it is done asynchronously regarding the human interaction that has triggered it. We chose not to have a blocking process for error management: Microcks importers will try to discover and import services but will die silently in case of any failure. We also think that this promotes an iterative and incremental way of working: you know that your job will gracefully fail if your new samples are not yet complete.\nSome of the error messages will be reported through the Last import errors status but some will not\u0026hellip; To help you in checking your artifacts for compliance with recommended practices and conventions, we\u0026rsquo;re developing the Microcks Linter Ruleset.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/administration/users/","title":"Managing Users","description":"","searchKeyword":"","content":"Overview This guide will show you how to use the Users Management capabilities of the Microcks Web UI.\nYou can partially manage users directly from the Microcks UI. \u0026ldquo;Partially\u0026rdquo; means that you are able to manage a user\u0026rsquo;s roles and groups within Microcks but that you\u0026rsquo;re not able to create a new user. This action is reserved for your Identity Provider used through the Keycloak configuration, or for Keycloak itself if you choose to use it as a provider. Please check the Identity Management section of the Security Configuration reference for more information on that.\n🚨 Prerequisites\nUsers can only be managed by a Microcks admin - we mean people having the admin role assigned. In order to be able to retrieve the list of users and operate changes, the user should also have the manage-users and manage-clients roles from the realm-management Keycloak internal client. See the Keycloak documentation for more on this point.\n1. 
Roles management Users management is simply a thumbnail within the Administration page that is available from the vertical menu on the left once logged in as administrator.\nOn this page, you can easily search users using their name and they\u0026rsquo;ll be listed, organized in pages. On each line of the list, you\u0026rsquo;ll have the opportunity to check the different roles endorsed by a user.\nRegistered means that the user has already signed in to Microcks and has just been endorsed with the user role, Manager means that the user has been endorsed with the manager Microcks role, Admin means that the user has been endorsed with the admin Microcks role. From the 3-dot menu at the end of the line, you have the ability to Add or Remove the different roles.\n💡 If you encounter any error while fetching users or roles, this probably means that your roles on the realm-management Keycloak internal client are not correctly set up. Please check this part.\n2. Groups membership management If you have enabled the segmentation of management roles on a master label you have chosen for organizing your repository (see Organizing Repository), you will also be able to assign group memberships for managers.\nWhen this feature is enabled, Microcks will create as many groups in Keycloak as there are different values for this master label. These groups are organized in a hierarchy so that you\u0026rsquo;ll have groups with names such as /microcks/manager/\u0026lt;value\u0026gt;, whose members represent the managers of the resources labeled with the \u0026lt;value\u0026gt; value.\nAlso, a new Manage Groups option appears in the option menu for each user. From this new modal window, you can easily manage group membership for a specified user as shown below:\n🚨 The groups in Keycloak are actually synchronized lazily each time an administrator visits this page. For some unknown reason, it appears that the sync can be delayed from time to time. Before raising an issue, please visit another page and come back to this one. 😉\nWrap-up This guide walks you through the Users Management capabilities that are available on the Microcks Web UI. We hope you learned how straightforward it is to manage user roles and groups, once your administrator user has the correct roles on the Keycloak realm-management client.\nFeel free to pursue your exploration with the Security Configuration reference for all the things related to Identity Management or security in general.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/apis/open-api/","title":"Microcks' OpenAPI","description":"","searchKeyword":"","content":"As a tool focused on APIs, Microcks also offers its own API that allows you to query its datastore and control the import jobs and configuration objects. You may use this API from your automation tool to dynamically launch new tests, register new mocks or globally control your Microcks server configuration.\nThe Swagger-UI below allows you to browse and discover the various API endpoints.\nPrevious releases of the API definitions can be found in the GitHub repository.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/getting-started/","title":"Getting started","description":"","searchKeyword":"","content":"Quickstart In this tutorial, you will discover Microcks mocking features by re-using a simple REST API sample. 
For that, you will run Microcks on your local machine, then load a sample provided by the Microcks team, explore the web user interface and then interact with an API mock.\nThe easiest way to get started with Microcks is using Docker or Podman with our ephemeral all-in-one Microcks distribution.\nIn your terminal, issue the following command - maybe replacing 8585 by another port of your choice if this one is not free:\n$ docker run -p 8585:8080 -it --rm quay.io/microcks/microcks-uber:latest-native This will pull and spin up the uber container and set up a simple environment for you to use. You should get something like this on your terminal:\n[...] . ____ _ __ _ _ /\\\\ / ___\u0026#39;_ __ _ _(_)_ __ __ _ \\ \\ \\ \\ ( ( )\\___ | \u0026#39;_ | \u0026#39;_| | \u0026#39;_ \\/ _` | \\ \\ \\ \\ \\\\/ ___)| |_)| | | | | || (_| | ) ) ) ) \u0026#39; |____| .__|_| |_|_| |_\\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v3.2.1) 14:51:07.473 INFO 1 --- [ main] i.g.microcks.MicrocksApplication : Starting AOT-processed MicrocksApplication using Java 17.0.10 with PID 1 (/workspace/io.github.microcks.MicrocksApplication started by cnb in /workspace) 14:51:07.473 INFO 1 --- [ main] i.g.microcks.MicrocksApplication : The following 1 profile is active: \u0026#34;uber\u0026#34; 14:51:07.520 INFO 1 --- [ main] i.g.microcks.config.WebConfiguration : Starting web application configuration, using profiles: [uber] 14:51:07.520 INFO 1 --- [ main] i.g.microcks.config.WebConfiguration : Web application fully configured [...] 14:51:07.637 INFO 1 --- [ main] i.g.m.util.grpc.GrpcServerStarter : GRPC Server started on port 9090 14:51:07.640 INFO 1 --- [ main] i.g.m.config.AICopilotConfiguration : AICopilot is disabled 14:51:07.682 INFO 1 --- [ main] i.g.m.config.SecurityConfiguration : Starting security configuration 14:51:07.682 INFO 1 --- [ main] i.g.m.config.SecurityConfiguration : Keycloak is disabled, permitting all requests 14:51:07.755 INFO 1 --- [ main] i.g.microcks.MicrocksApplication : Started MicrocksApplication in 0.296 seconds (process running for 0.303) Open a new browser tab and point to the http://localhost:8585 endpoint - or the other port you chose to access Microcks.\nUsing Microcks Now you are ready to use Microcks for deploying your own services and API mocks! Before that, let\u0026rsquo;s have a look at the application home screen and introduce the main concepts. Using the application URL after installation, we should land on this page with two main entry points: APIs | Services and Importers.\nAs you may have guessed, APIs | Services is for browsing your Services and API repository, discovering and accessing documentation, mocks, and tests. Importers will help you to populate your repository, allowing you to define Jobs that periodically scan your Git or simple HTTP repositories for new artifacts, parse them and integrate them into your Services and API repository. In fact Importers help you discover both new and modified Services. Before using your own service definition files, let\u0026rsquo;s load some samples into Microcks for a test ride!\nLoading a Sample We provide different samples that illustrate the different capabilities of Microcks on different protocols. Samples can be loaded via Importers like stated above but also via the Microcks Hub entry in the vertical menu on the left.\nAmong the different tiles on this screen, choose the MicrocksIO Samples API one that will give you access to the list of available samples. 
For getting started with Microcks, we\u0026rsquo;re going to explore the Pastry API - 2.0, a simple REST API. Select it from the list of available APIs on the bottom right:\nOn the following screen, click the big blue Install button where you will choose the + Direct import method.\nViewing an API When the import is done, a new API has been discovered and added to your repository. You should have the result below with two notification toasts on the top right.\nYou can then click the green ✓ Go button - or now visit the API | Services menu entry - to access the Pastry API - 2.0 details:\nYou\u0026rsquo;ll be able to access the details, documentation and request/response samples for each operation/resource in the screen below. One important bit of information here is the Mocks URL field: this is the endpoint where Microcks automatically deploys a mock for this operation. The table just below shows request/response pairs and a detailed URL with the HTTP verb showing how to invoke this mock.\nInteracting with a Mock At the end of the Mock URL line, you\u0026rsquo;ll notice two icon buttons. The first one allows you to copy the URL to the clipboard so that you can directly use it in a browser for example. The second one allows you to get a curl command to interact with the mocked API from the terminal. You can copy the URL for the Millefeuille example and give it a try in your terminal:\n$ curl -X GET \u0026#39;http://localhost:8585/rest/API+Pastry+-+2.0/2.0.0/pastry/Millefeuille\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; {\u0026#34;name\u0026#34;:\u0026#34;Millefeuille\u0026#34;,\u0026#34;description\u0026#34;:\u0026#34;Delicieux Millefeuille pas calorique du tout\u0026#34;,\u0026#34;size\u0026#34;:\u0026#34;L\u0026#34;,\u0026#34;price\u0026#34;:4.4,\u0026#34;status\u0026#34;:\u0026#34;available\u0026#34;} Ta Dam! 🎉\nWhat\u0026rsquo;s next? Now that you have basic information on how to set up and use Microcks, you can go further with:\nImporting additional samples from MicrocksIO Samples API in the Microcks Hub, Continuing your tour with Getting started with Tests, Writing your own artifact files and creating: your first OpenAPI mock, your first GraphQL mock, your first gRPC mock, or your first AsyncAPI mock with Kafka. "},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/asyncapi-conventions/","title":"AsyncAPI Conventions","description":"","searchKeyword":"","content":"Conventions In addition to schema information, Microcks uses AsyncAPI Message Example Objects to produce example messages for mocking purposes.\nFor AsyncAPI 2.x documents, the name attribute of an example is mandatory so that Microcks can reuse this name to identify available mock messages. Starting with AsyncAPI 3.0, the name is no longer mandatory and Microcks can then compute a name for you based on the message name and the index of the example in the list.\nFor each version of an API managed by Microcks, it will create appropriate destinations for operations, mixing specification elements, protocol binding specifics and versioning concerns. Destinations managed by Microcks are then referenced within the API details page.\nBindings AsyncAPI specification dissociates the concern of message description (through payload and headers schemas) from the concern of servers and protocol bindings. 
The same API may have different bindings allowing you to specify protocol-specific concerns like queue or topic naming, serialization format and so on.\nMicrocks supports the following bindings:\nKAFKA binding - the default if you don\u0026rsquo;t explicitly define a binding in your AsyncAPI document, WS (for WebSocket) - is directly handled by Microcks and you don\u0026rsquo;t need an additional broker or server, MQTT will be active if Microcks is connected to an MQTT broker, AMQP will be active if Microcks is connected to a RabbitMQ or AMQP 0.9 compatible broker, NATS will be active if Microcks is connected to a NATS broker, GOOGLEPUBSUB will be active if Microcks is connected to a Google Cloud PubSub service, SQS will be active if Microcks is connected to an AWS SQS service (LocalStack can be used), SNS will be active if Microcks is connected to an AWS SNS service (LocalStack can be used). For each channel within your AsyncAPI specification, Microcks will create and manage destinations on the connected brokers where bindings are defined.\nThose destinations will be named with the following convention to avoid collisions between different APIs or versions:\n\u0026lt;sanitized_API_name\u0026gt;(-|/)\u0026lt;API_version\u0026gt;(-|/)\u0026lt;sanitized_operation\u0026gt;[(-|/)\u0026lt;channel_path\u0026gt;] Channel parameters Microcks supports templatized channel endpoints using parameters like {id} in their name. Support of parameters for AsyncAPI 2.x presents some restrictions though.\nAsyncAPI v2.x Microcks only supports static parameter definition for AsyncAPI v2.x. That means that for a parameter, you also need to specify the possible different values with examples.\nLet\u0026rsquo;s imagine a basic Chat Room channel. In order to have the different messages (Example 1, Example 2 and Example 3) dispatched on different rooms, you\u0026rsquo;ll have to define the different values of the roomId parameter for those examples, as illustrated below:\nchannels: /chat/{roomId}: parameters: roomId: description: Identifier of the chat room schema: type: string examples: Example 1: value: 1 Example 2: value: 2 Example 3: value: 2 [...] components: messages: chatMessage: payload: $ref: \u0026#39;#/components/schemas/ChatMessageType\u0026#39; examples: - name: Example 1 payload: message: Hello - name: Example 2 payload: message: Bonjour - name: Example 3 payload: message: Namaste Starting with Microcks 1.11.0, you\u0026rsquo;ll also have access to a notation that is much more aligned with the JSON Schema constraint of the schema.examples definition being an array:\nchannels: /chat/{roomId}: parameters: roomId: description: Identifier of the chat room schema: type: string examples: - Example 1: value: 1 - Example 2: value: 2 - Example 3: value: 2 [...] or to a shortcut notation we introduced with the AsyncAPI v3.x importer. This shortcut notation allows you to define example name and value using name:value items like illustrated below:\nchannels: /chat/{roomId}: parameters: roomId: description: Identifier of the chat room schema: type: string examples: - \u0026#39;Example 1:1\u0026#39; - \u0026#39;Example 2:2\u0026#39; - \u0026#39;Example 3:2\u0026#39; [...] AsyncAPI v3.x For AsyncAPI v3.x, Microcks still supports static parameter definition like for AsyncAPI v2.x but also provides support for dynamic parameter definition using the location attribute.\nLet\u0026rsquo;s reuse our basic Chat Room channel. 
The location attribute allows directly retrieving the roomId value from the message payload so that you don\u0026rsquo;t have to specify values for the parameter. Also, as Microcks supports AsyncAPI v3 examples without names, the examples no longer need to have name attributes in that case (because we don\u0026rsquo;t need a key to match payload and parameter values).\nchannels: chatRoom: address: /chat/{roomId} parameters: roomId: description: Identifier of the chat room location: $message.payload#/roomId [...] components: messages: chatMessage: payload: $ref: \u0026#39;#/components/schemas/ChatMessageType\u0026#39; examples: - payload: message: Hello roomId: 1 - payload: message: Bonjour roomId: 2 - payload: message: Namaste roomId: 2 Illustration We will illustrate how Microcks uses the AsyncAPI specification through a User signed-up API sample that is inspired by one of the AsyncAPI tutorials. The specification file in YAML format can be found here. This is a single SUBSCRIBE operation API that defines the format of events that are published when a user signs up to an application.\nSpecifying messages Sample messages are defined within your specification document, simply using the examples attribute like marked below:\nchannels: user/signedup: description: The topic on which user signed up events may be consumed subscribe: summary: Receive information about user signed up operationId: receivedUserSignedUp message: description: An event describing that a user just signed up. traits: - $ref: \u0026#39;#/components/messageTraits/commonHeaders\u0026#39; payload: [...] examples: # \u0026lt;= Where we\u0026#39;ll define sample messages for this operation Examples will be an array of example objects.\nPayload Payload is expressed in the mandatory payload attribute, directly in YAML or by embedding JSON. In our illustration, we define below 2 examples with straightforward summaries:\nexamples: - name: laurent summary: Example for Laurent user payload: |- {\u0026#34;id\u0026#34;: \u0026#34;{{randomString(32)}}\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;{{now()}}\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} - name: john summary: Example for John Doe user payload: id: \u0026#39;{{randomString(32)}}\u0026#39; sendAt: \u0026#39;{{now()}}\u0026#39; fullName: John Doe email: [email protected] age: 36 Headers Headers are expressed in the optional headers attribute, directly in YAML or by embedding JSON. In our illustration, we define below 2 examples using both methods:\nexamples: - name: laurent [...] headers: |- {\u0026#34;my-app-header\u0026#34;: 23} - name: john [...] headers: my-app-header: 24 Channel/endpoint names Given the following AsyncAPI specification:\nasyncapi: \u0026#39;2.1.0\u0026#39; info: title: User signed-up API version: 0.1.1 description: This service is in charge of processing user signups channels: user/signedup: subscribe: Microcks will detect an operation named SUBSCRIBE user/signedup and create destinations that integrate service name and version, channel name and protocol-specific formatting. For example, it will create a Kafka topic named UsersignedupAPI-0.1.1-user-signedup or a WebSocket endpoint named /ws/User+signed-up+API/0.1.1/user/signedup. 
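For instance, assuming a locally reachable Kafka broker (the localhost:9092 address below is a placeholder of ours, not a value from this page), you could peek at the mock messages published on such a topic with the kafkacat CLI:\nkafkacat -b localhost:9092 -t UsersignedupAPI-0.1.1-user-signedup -o end 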
Destination and endpoint names for the different protocols are available on the page presenting the API details.\nAsyncAPI extensions Microcks proposes custom AsyncAPI extensions to specify mock organizational or behavioral elements that cannot be deduced directly from the AsyncAPI document.\nAt the info level of your AsyncAPI document, you can add label specifications that will be used for organizing the Microcks repository. See the illustration below and the use of the x-microcks extension:\nasyncapi: \u0026#39;2.1.0\u0026#39; info: title: User signed-up API version: 0.1.1 description: This service is in charge of processing user signups x-microcks: labels: domain: authentication status: GA team: Team B [...] At the operation level of your AsyncAPI document, we can add frequency, which is the interval of time in seconds between 2 publications of mock messages. Let\u0026rsquo;s give an example for AsyncAPI using the x-microcks-operation extension:\n[...] channels: user/signedup: subscribe: x-microcks-operation: frequency: 30 message: $ref: \u0026#39;#/components/messages/UserSignedUp\u0026#39; [...] In AsyncAPI v3.x, operations are now differentiated from channels. Our extension is still called x-microcks-operation and should live at the operation level as illustrated below:\n[...] channels: user-signedup: messages: userSignedUp: $ref: \u0026#39;#/components/messages/userSignedUp\u0026#39; operations: publishUserSignedUps: action: \u0026#39;send\u0026#39; channel: $ref: \u0026#39;#/channels/user-signedup\u0026#39; messages: - $ref: \u0026#39;#/channels/user-signedup/messages/userSignedUp\u0026#39; x-microcks-operation: frequency: 30 [...] Once labels and frequency are defined that way, they will overwrite the different customizations you may have done through the UI or API at the next import of the AsyncAPI document.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/integration/microcks-hub/","title":"Integrating with Microcks Hub","description":"","searchKeyword":"","content":" 🪄 To Be Created\nThis is a new documentation page that has to be written as part of our Refactoring Effort.\nGoal of this page\n\u0026hellip; "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/custom-dispatchers/","title":"Setting custom dispatcher","description":"","searchKeyword":"","content":" 🪄 To Be Created\nThis is a new documentation page that has to be written as part of our Refactoring Effort.\nGoal of this page\nLoad a sample API and explain how to proceed via the UI Add x-microcks-operation attributes and re-load the API to check the effect Discuss the availability of API Metadata or an API call to do that "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/automation/cli/","title":"Using Microcks CLI","description":"","searchKeyword":"","content":"Overview This guide illustrates the usage of microcks-cli, a command-line tool for interacting with Microcks APIs. It allows you to launch tests or import API artifacts with minimal dependencies. It is managed and released independently of the core Microcks server components within its own GitHub repository. The CLI connects to the API using a Service Account, so that topic is definitely worth the read 😉\nThe CLI also supports both authenticated and non-authenticated modes when Microcks is deployed without Keycloak. You\u0026rsquo;ll still have to provide a client id and secret to commands but they will be ignored. See issue #23 for more details.\n1. 
Install the CLI The CLI is provided as a binary distribution or can be used directly through a container image.\nBinary distribution The CLI binary releases are available for Linux, MacOS or Windows platforms with different architectures on GitHub releases. Just download the binary corresponding to your system and put it somewhere on your PATH. For example, on a Linux platform with amd64 architecture, you may run these commands:\ncurl -Lo microcks-cli https://github.com/microcks/microcks-cli/releases/download/0.5.5/microcks-cli-linux-amd64 \\ \u0026amp;\u0026amp; chmod +x microcks-cli Container image The microcks-cli is also available as a container image so that you may run it without installing it. The hosting repository is on Quay.io. You can simply pull the image to get it locally:\ndocker pull quay.io/microcks/microcks-cli:latest 2. Launching a test Assuming you are running the same examples as in the Getting started and Getting started with Tests tutorials, you may use this command line to launch a new test:\nmicrocks-cli test \u0026#39;API Pastry - 2.0:2.0.0\u0026#39; http://host.docker.internal:8282 OPEN_API_SCHEMA \\ --microcksURL=http://host.docker.internal:8585/api/ \\ --keycloakClientId=microcks-serviceaccount \\ --keycloakClientSecret=ab54d329-e435-41ae-a900-ec6b3fe15c54 \\ --operationsHeaders=\u0026#39;{\u0026#34;globals\u0026#34;: [{\u0026#34;name\u0026#34;: \u0026#34;x-api-key\u0026#34;, \u0026#34;values\u0026#34;: \u0026#34;azertyuiop\u0026#34;}], \u0026#34;GET /pastries\u0026#34;: [{\u0026#34;name\u0026#34;: \u0026#34;x-trace-id\u0026#34;, \u0026#34;values\u0026#34;: \u0026#34;qsdfghjklm\u0026#34;}]}\u0026#39; \\ --insecure --waitFor=6sec With some explanations on arguments and flags:\nThe 1st argument is the API name and version separated with :, The 2nd argument is the application endpoint to test, The 3rd argument is the testing strategy to execute, --flags are contextual flags for API endpoints, authentication, timeouts, etc. The same command can also be executed using the container image:\ndocker run -it quay.io/microcks/microcks-cli:latest microcks-cli test \\ \u0026#39;API Pastry - 2.0:2.0.0\u0026#39; http://host.docker.internal:8282 OPEN_API_SCHEMA \\ --microcksURL=http://host.docker.internal:8585/api/ \\ --keycloakClientId=microcks-serviceaccount \\ --keycloakClientSecret=ab54d329-e435-41ae-a900-ec6b3fe15c54 \\ --operationsHeaders=\u0026#39;{\u0026#34;globals\u0026#34;: [{\u0026#34;name\u0026#34;: \u0026#34;x-api-key\u0026#34;, \u0026#34;values\u0026#34;: \u0026#34;azertyuiop\u0026#34;}], \u0026#34;GET /pastries\u0026#34;: [{\u0026#34;name\u0026#34;: \u0026#34;x-trace-id\u0026#34;, \u0026#34;values\u0026#34;: \u0026#34;qsdfghjklm\u0026#34;}]}\u0026#39; \\ --insecure --waitFor=6sec Check the microcks-cli README for full instructions on arguments and flags.\nWrap-up You have learned how to install and use the Microcks CLI for the basic task of launching a new test. This is what you would typically do within your CI/CD pipeline to ensure the application you just deployed correctly implements API specifications.\nThe Microcks CLI also provides the import command that allows you to push artifacts into the Microcks repository. This command requires that you have a Service Account with more privileges than the default one though. You may follow up this guide by learning more about Service Accounts.\nThe CLI provides the helpful commands version and help to get basic information about it. 
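As an illustration, a minimal import invocation could look like the sketch below; the artifact file path is a placeholder of ours, and the credentials simply reuse the default service account values shown above:\nmicrocks-cli import ./apipastry-openapi.yaml \\ --microcksURL=http://host.docker.internal:8585/api/ \\ --keycloakClientId=microcks-serviceaccount \\ --keycloakClientSecret=ab54d329-e435-41ae-a900-ec6b3fe15c54 \\ --insecure 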
Check the microcks-cli README for full instructions on available commands depending on your version.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/docker-compose/","title":"With Docker Compose","description":"","searchKeyword":"","content":"This guide shows you how to install and run Microcks using Docker Compose.\nDocker Compose is a tool for easily testing and running multi-container applications. Microcks offers a simple way to set up the minimal required containers to have a functional environment on your local computer.\nUsage To get started, make sure you have Docker installed on your system.\nIn your terminal, issue the following commands:\nClone this repository. git clone https://github.com/microcks/microcks.git --depth 10 Change to the install folder. cd microcks/install/docker-compose Spin up the containers. docker compose up -d This will start the required containers and set up a simple environment for you to use.\nOpen a new browser tab and point to the http://localhost:8080 endpoint. This will redirect you to the Keycloak sign-in page for login. Use the following default credentials to log into the application:\nUsername: admin Password: microcks123 You will be redirected to the main dashboard page.\nEnabling Asynchronous API features Support for the Asynchronous API features of Microcks is not enabled by default in the docker-compose.yml file. If you feel your local machine has enough resources to afford it, you can enable them using a slightly different command line.\nIn your terminal, use the following command instead:\ndocker compose -f docker-compose.yml -f docker-compose-async-addon.yml up -d Docker Compose now launches additional containers, namely zookeeper, kafka and the microcks-async-minion. The above command should produce the following output:\nCreating network \u0026#34;docker-compose_default\u0026#34; with the default driver Creating microcks-zookeeper ... done Creating microcks-db ... done Creating microcks-sso ... done Creating microcks-postman-runtime ... done Creating microcks ... done Creating microcks-kafka ... done Creating microcks-async-minion ... done You may want to check our blog post for a detailed walkthrough on starting Async features on docker-compose.\nIf you\u0026rsquo;re feeling lucky regarding your machine, you can even add the Kafdrop utility to visualize and troubleshoot Kafka messages with this command:\ndocker compose -f docker-compose.yml -f docker-compose-async-addon.yml -f kafdrop-addon.yml up -d Development mode A development-oriented mode, without the Keycloak service, is also available thanks to:\ndocker compose -f docker-compose-devmode.yml up -d This configuration enables Asynchronous API features in a very lightweight mode using the Redpanda broker instead of a full-blown Apache Kafka distribution.\nWrap-up You just installed Microcks on your local machine using Docker Compose and terminal commands. Congrats! 🎉\nYou have discovered that Microcks provides a bunch of default profiles to use different capabilities of Microcks depending on your working situation. Advanced profiles are using local configuration files mounted from the /config directory. 
You can refer to the Application Configuration Reference to get the full list of configuration options.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/overview/main-concepts/","title":"Main Concepts","description":"","searchKeyword":"","content":"Before diving in, it is useful to briefly introduce or recall concepts or terminology we frequently use in the documentation.\nArtifacts In Microcks, an Artifact is an asset that holds valuable information on how your API or microservices are expected to work. It is usually represented by a file on your local machine or in a remote version control system.\nOne of Microcks\u0026rsquo;s beauties is that it uses standard specifications or standard tooling files as Artifacts, allowing you to reuse existing assets. OpenAPI, AsyncAPI specs, GraphQL, gRPC schemas, Postman collections or SoapUI projects are all valid artifacts you can feed Microcks with. Microcks will use constraints and examples from them to build its knowledge base.\nThe more Artifacts you put in Microcks, the richer its knowledge base about your APIs and their versions will be, and the more accurate the Mocks and Tests that result from this process will be!\nMocks Mocks - or simulations as we sometimes call them - are fake API or service implementations inferred from the aggregated knowledge base. In a nutshell, you feed Microcks with your Artifacts, and it immediately produces Mocks available on specific endpoints.\nYou can use these endpoints to play around with your API as if it were real. As an API owner, you can start collecting consumer feedback. As a developer, you can start developing and using this API without bothering with external dependencies. You don\u0026rsquo;t even have to write code!\nMicrocks provides smart and transparent mocks. Your consumers don\u0026rsquo;t even notice they are fake! Here again: the more comprehensive the Artifacts you put in Microcks, the more intelligent your mocks will be!\nTests Tests are the direct side-effect benefits of the Microcks knowledge base! From all the acquired knowledge and samples, Microcks can also validate that an actual implementation of an API or service conforms to its expectations.\nIn the literature, this process is usually called contract or conformance testing and is associated with integration testing methodologies.\nFrom the different Artifacts you provided, Microcks can apply different testing strategies ranging from the infrastructure to the business level, reusing information and constraints in various Artifacts.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/apis/async-api/","title":"Microcks' AsyncAPI","description":"","searchKeyword":"","content":"As a tool focused on APIs and Events, Microcks also offers its own Events API that allows you to subscribe to events produced by Microcks. Depending on your deployment topology, those events can be consumed directly via WebSockets or via a Kafka topic named microcks-services-updates.\nThe AsyncAPI Web Component below allows you to browse and discover the various API events.\nPrevious releases of the API definitions can be found in the GitHub repository.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/configuration/security-config/","title":"Security Configuration","description":"","searchKeyword":"","content":"Overview This page aims to give you a comprehensive reference on the configuration properties used within Microcks. 
This information is the ideal companion to the Architecture \u0026amp; Deployment Options explanations and will be crucial for people who want to review the different security-related capabilities of a deployment.\nNetwork Using a proxy for egress connections You can force the main Webapp component to use a corporate proxy for egress using the application.properties file. No proxy is configured by default:\nnetwork.proxyHost=${PROXY_HOST:} network.proxyPort=${PROXY_PORT:} network.proxyUsername=${PROXY_USERNAME:} network.proxyPassword=${PROXY_PASSWORD:} network.nonProxyHosts=${PROXY_EXCLUDE:localhost|127.0.0.1|*.svc.cluster.local} 💡 As the Async Minion component is not expected to access remote resources, it is not expected to connect to a proxy.\nIdentity Management Since its inception, Microcks has relied on a third-party component named Keycloak for managing security-related topics like user identification, authentication and permissions, as well as API endpoint protection. Keycloak is also used for providing Service Account authentication and authorization. This topic is detailed in a dedicated page.\nA basic installation of Microcks comes with its own Keycloak instance embedding the definitions of the components Microcks needs into what is called a realm. An advanced installation of Microcks can reuse an existing Keycloak instance and will require your administrator to create a new dedicated realm. We provide a sample of such a realm configuration that can be imported into your instance here in the Microcks realm full configuration\nBasically, Microcks components need the reference of the Keycloak instance endpoint in an environment variable called KEYCLOAK_URL.\nAuthentication User Authentication in Microcks is delegated to the configured Keycloak instance using the OpenID Connect Authorization Code Flow. The Keycloak instance can be used as the direct source of users\u0026rsquo; identity or can be used as a broker for one or more configured Identity Providers.\nThe default installation and realm settings come with the internal identity provider and 3 default users: user, manager and admin that all have the same microcks123 password. It is up to you to configure an Identity Provider attached to the realm Microcks is using.\nThe realm Microcks is using is an installation parameter that defaults to microcks. You can adapt it to whichever realm you want. See the Reusing an existing Keycloak section below.\nOn the client side (ie. in the browser), Microcks is using a client application called microcks-app-js that is configured to perform redirects to the public endpoint URL of the microcks app.\nOn the server side, Microcks is using a client application called microcks-app for checking and trusting JWT bearers provided by the frontend application API calls.\nThese parameters are set within the application.properties configuration file. 
See an example below:\n# Keycloak configuration properties keycloak.auth-server-url=${KEYCLOAK_URL:http://localhost:8180} keycloak.realm=microcks keycloak.resource=microcks-app keycloak.bearer-only=true keycloak.ssl-required=external # Spring Security adapter configuration properties spring.security.oauth2.client.registration.keycloak.client-id=microcks-app spring.security.oauth2.client.registration.keycloak.authorization-grant-type=authorization_code # Keycloak access configuration properties sso.public-url=${KEYCLOAK_PUBLIC_URL:${keycloak.auth-server-url}} Roles and Permissions The Microcks realm typically defines 3 application roles that are declared as client roles on the Keycloak side. These roles are attached to the microcks-app client application.\nThese roles are:\nuser: a regular authenticated user of the Microcks application. This is the default role that is automatically attached the first time a user succeeds in authenticating into the Microcks app, manager: a user identified as having management roles on the Microcks repository content. Managers have the permissions to add and remove APIs \u0026amp; Services in the repository as well as to configure mock operation properties, admin: a user identified as having the administration role on the Microcks instance. Admins have the manager permissions and are able to manage users, configure external repository secrets or perform backup/restore operations. Whether a connected user has these roles is checked both on the client and the server sides using Keycloak adapters.\nGroups segmentation As an optional security feature, you have the ability to segment the repository management permissions depending on a master label you have chosen for organizing your repository. See Organizing repository for an introduction to the master label.\nFor example, if you defined the domain label as the master with customer, finance and sales values, you\u0026rsquo;ll be able to define users with the manager role only for the APIs \u0026amp; Services that have been labeled accordingly. Sarah may be defined as a manager for domain=customer and domain=finance services, while John may be defined as the manager for domain=sales APIs \u0026amp; services.\nWhen this feature is enabled, Microcks will create as many groups in Keycloak as there are different values for this master label. These groups are organized in a hierarchy so that you\u0026rsquo;ll have groups with names like /microcks/manager/\u0026lt;label\u0026gt; whose members represent the managers of the resources labeled with the \u0026lt;label\u0026gt; value.\nThis feature is enabled in the features.properties configuration file with the following properties:\nSub-Property Description enabled A boolean flag that turns on the feature. true or false artifact-import-allowed-roles A comma-separated list of roles that you may restrict import of artifacts to. For example:\n# features.properties features.feature.repository-tenancy.enabled=true features.feature.repository-tenancy.artifact-import-allowed-roles=admin,manager,manager-any 🗒️ The manager-any is not actually a role, it\u0026rsquo;s a notation meaning \u0026ldquo;a user that belongs to any management group even if not endorsing the global manager role\u0026rdquo;.\nReusing an existing Keycloak The Microcks Helm Chart and Operator can be configured to reuse an already existing Keycloak instance for your organization.\nFirst, you have to prepare your Keycloak instance to host and secure the future Microcks deployment. 
Basically, you have 2 options for this:\nCreate a new realm using the Keycloak documentation and choosing the Microcks realm full configuration as the file to import during creation, OR Reuse an existing realm, completing its definition with the Microcks realm addons configuration by simply importing this file within the realm configuration. 💡 You might want to change the redirectUris in the Microcks realm configuration file to the corresponding URI of the Microcks application; by default it points to localhost.\nImporting one or the other of the Microcks realm configuration files will bring all the necessary clients, roles, groups and scope mappings. If you created a new realm, the Microcks configuration also brings default users you may later delete when configuring your own identity provider in Keycloak.\nThen, you actually have to deploy the Microcks instance configured for using the external Keycloak. Depending on whether you\u0026rsquo;ve used Helm or the Operator to install Microcks, you\u0026rsquo;ll have to customize your values.yml file or the MicrocksInstall custom resource, but the properties have the same names in both installation methods:\nkeycloak: install: false realm: my-own-realm url: keycloak.example.com:443 privateUrl: http://keycloak.namespace.svc.cluster.local:8080 # Recommended serviceAccount: microcks-serviceaccount serviceAccountCredentials: ab54d329-e435-41ae-a900-ec6b3fe15c54 # Change recommended The privateUrl is optional and prevents token-trusting requests from the webapp component to Keycloak from going through a public address and network. In a Kubernetes deployment, you\u0026rsquo;ll typically put the cluster-internal Service name there.\nThe serviceAccountCredentials should typically be changed as this is the default value that comes with your realm setup. For an introduction on the purpose of service accounts in Microcks, check Service Accounts.\nHandling proxies for Keycloak access Depending on your network configuration, authentication of requests with Keycloak can be a bit tricky as Keycloak requires some specific load-balancer or proxy settings. Typically, you may need to configure specific address ranges for proxies if you\u0026rsquo;re not using the usual private IPv4 blocks.\nThis can be done by specifying additional extraProperties in the microcks part of your configuration - either within the spec.microcks path if you\u0026rsquo;re using the Operator MicrocksInstall custom resource or from the direct microcks path in values.yml when using the Helm chart. The configuration below declares a new IP range to treat as a proxy in order to properly forward proxy headers to the application code:\nextraProperties: server: tomcat: remoteip: internal-proxies: 172.16.0.0/12 This configuration will initialize a new application-extra.properties in the appropriate ConfigMap, allowing you to extend the application.properties with your customizations.\nOAuth2/JWT configuration OAuth2/JWT detailed configuration is hosted in the application.properties file on the main Webapp component. We\u0026rsquo;re using the Spring Security OAuth2 configuration mechanism. If a privateUrl option is provided to access Keycloak, the jwk-set-uri property must also be set to use the private url to fetch the certificates from an internal network endpoint.\n# Spring Security adapter configuration properties [..] 
spring.security.oauth2.client.registration.keycloak.scope=openid,profile spring.security.oauth2.client.provider.keycloak.issuer-uri=${KEYCLOAK_URL}/realms/${keycloak.realm} spring.security.oauth2.client.provider.keycloak.user-name-attribute=preferred_username spring.security.oauth2.resourceserver.jwt.issuer-uri=${sso.public-url}/realms/${keycloak.realm} # Uncomment this line if using a privateUrl to connect to Keycloak. #spring.security.oauth2.resourceserver.jwt.jwk-set-uri=${KEYCLOAK_URL}/realms/${keycloak.realm}/protocol/openid-connect/certs Kafka Reusing an existing secured Kafka The Microcks Helm Chart and Operator can be configured to reuse an already existing Kafka broker instance for your organization.\nAs of today, Microcks supports connecting to SASL using JAAS and Mutual TLS secured Kafka brokers. For an introduction on these, please check Authentication Methods.\nFor SASL using JAAS, you\u0026rsquo;ll have to configure additional properties for accessing the cluster CA cert, depending on the SASL mechanism. The truststoreSecretRef is actually a reference to a Kubernetes Secret that should be created first and be reachable from the Microcks instance:\nfeatures: async: kafka: authentication: type: SASL_SSL # SASL using JAAS authentication truststoreType: PKCS12 # JKS also possible. truststoreSecretRef: secret: my-kafka-cluster-ca-cert # Name of Kubernetes secret holding cluster ca cert. storeKey: ca.p12 # Truststore ca cert entry in Secret. passwordKey: ca.password # Truststore password entry in Secret. saslMechanism: SCRAM-SHA-512 saslJaasConfig: org.apache.kafka.common.security.scram.ScramLoginModule required username=\u0026#34;scram-user\u0026#34; password=\u0026#34;tDtDCT3pYKE5\u0026#34;; For mutual TLS, you\u0026rsquo;ll have to configure additional properties for accessing the client certificate. The keystoreSecretRef is actually a reference to a Kubernetes Secret that should be created first and be reachable from the Microcks instance:\nfeatures: async: kafka: authentication: type: SSL # Mutual TLS authentication truststoreType: PKCS12 # JKS also possible. truststoreSecretRef: secret: my-kafka-cluster-ca-cert # Name of Kubernetes secret holding cluster ca cert. storeKey: ca.p12 # Truststore ca cert entry in Secret. passwordKey: ca.password # Truststore password entry in Secret. keystoreType: PKCS12 # JKS also possible. keystoreSecretRef: secret: my-mtls-user # Name of Kubernetes secret holding user client cert. storeKey: user.p12 # Keystore client cert entry in Secret. passwordKey: user.password # Keystore password entry in Secret. 💡 We recommend having an in-depth look at the Helm Chart README and the Operator README to get the most up-to-date information on detailed configuration.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/","title":"Usage","description":"Here below all the guides related to **Usage**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/async-protocols/avro-messaging/","title":"Kafka, Avro & Schema Registry","description":"","searchKeyword":"","content":"Overview This guide shows you how to use Microcks for mocking and testing Avro encoding on top of Apache Kafka. 
You\u0026rsquo;ll see how Microcks can speed up the sharing of Avro schemas with consumers using a Schema Registry, and we will check how Microcks can detect drifts between the expected Avro format and the one really used.\nMicrocks supports Avro as an encoding format for mocking and testing asynchronous and event-driven APIs through AsyncAPI. When it comes to serializing Avro data to a Kafka topic, you usually have 2 options:\nThe \u0026ldquo;old-fashioned\u0026rdquo; one that is about putting the raw Avro binary representation of the message payload, The \u0026ldquo;modern\u0026rdquo; one that is about putting the Schema ID + the Avro binary representation of the message payload (see Schema Registry: A quick introduction). This guide presents the 2 options that we will call RAW or REGISTRY. Microcks is by default configured to manage the RAW option so that it does not require any external dependency to get you started. If you want to stick with this option, the first step below is obviously optional.\n1. Setup Schema Registry Microcks has been successfully tested with both Confluent Schema Registry and Apicurio Registry. Both can be deployed as containerized workloads on your Kubernetes cluster. Microcks does not provide any installation scripts or procedures; please refer to the projects\u0026rsquo; or related products\u0026rsquo; documentation.\nWhen connected to a Schema Registry, Microcks pushes the Avro Schema to the registry at the same time it is pushing Avro-encoded mock messages to the Kafka topic. That way, event consumers may retrieve the Avro Schema from the registry to be able to deserialize messages.\nIf you have used the Operator based installation of Microcks, you\u0026rsquo;ll need to add some extra properties to your MicrocksInstall custom resource. The fragment below shows the important ones:\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: [...] features: async: enabled: true [...] defaultAvroEncoding: REGISTRY kafka: [...] schemaRegistry: url: https://schema-registry.apps.example.com confluent: true username: microcks credentialsSource: USER_INFO The important things to notice are:\ndefaultAvroEncoding should be set to REGISTRY (this is indeed a workaround until AsyncAPI adds support for specifying the serialization details at the Binding level. See this issue for more.) The schemaRegistry block should now be specified with the correct url. The confluent mode allows telling Microcks that the registry is the Confluent one OR turning on the Confluent compatibility mode if you\u0026rsquo;re using an Apicurio Registry. username and credentialsSource are only used if using a secured Confluent registry. If you have used the Helm Chart based installation of Microcks, this is the corresponding fragment put in a values.yml file:\n[...] features: async: enabled: true [...] defaultAvroEncoding: REGISTRY kafka: [...] schemaRegistry: url: https://schema-registry.apps.example.com confluent: true username: microcks credentialsSource: USER_INFO The actual connection to the Schema Registry will only be made once Microcks sends Avro messages to Kafka. Let\u0026rsquo;s see below how to use Avro encoding with AsyncAPI.\n2. Use Avro in AsyncAPI AsyncAPI allows referencing the Avro schema used for serializing / deserializing messages on a Kafka topic. 
The flexible notation of AsyncAPI allows doing that in 3 different ways:\nUsing the embedded notation: that means that the Avro schema is defined inline within the message payload property, Using a remote reference: that means that the schema is specified using an absolute remote endpoint like $ref: 'https://schemas.example.com/user' within the message payload property, Using a local reference: that means that the schema is specified using a relative reference like $ref: './user-signedup.avsc#/User' within the message payload property. Below is a fragment of an AsyncAPI specification file that shows the important things to notice when planning to use Avro and Microcks with AsyncAPI. It comes from a sample you can find on our GitHub repository.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] channels: user/signedup: [...] subscribe: [...] contentType: avro/binary schemaFormat: application/vnd.apache.avro+json;version=1.9.0 payload: $ref: \u0026#39;./user-signedup.avsc#/User\u0026#39; You\u0026rsquo;ll notice that it is important that the contentType and schemaFormat properties be defined according to the Avro format. In the same folder of this GitHub repository, you\u0026rsquo;ll also find the user-signedup.avsc file defining the User record type like below:\n{ \u0026#34;namespace\u0026#34;: \u0026#34;microcks.avro\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;record\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;User\u0026#34;, \u0026#34;fields\u0026#34;: [ {\u0026#34;name\u0026#34;: \u0026#34;fullName\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;}, {\u0026#34;name\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;}, {\u0026#34;name\u0026#34;: \u0026#34;age\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;int\u0026#34;} ] } As we use references, our full specification is now spanning multiple files, so you won\u0026rsquo;t be able to simply upload one file for API import into Microcks. You will have to define a full Importer Job as described here. During the import of the AsyncAPI contract file within Microcks, local references will be resolved and files downloaded and integrated within Microcks\u0026rsquo; own repository. The capture below illustrates in the Contracts section that there are now two files: an AsyncAPI one and an Avro schema one.\nFinally, as Microcks\u0026rsquo; internal mechanics are based on examples, you will also have to attach examples to your AsyncAPI specification. But: how to specify examples for a binary encoding such as Avro? No problem! Simply use JSON or YAML as illustrated in the fragment below, still coming from our GitHub repository.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] channels: user/signedup: [...] subscribe: [...] examples: - laurent: payload: |- {\u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} - john: payload: fullName: John Doe email: [email protected] age: 36 3. Validate your mocks Now it\u0026rsquo;s time to validate that the mock publication of Avro messages is correct.\nWith Schema Registry When using the REGISTRY encoding option with a deployed Schema Registry, things are pretty simple as you can interact with the registry either from the GUI or the CLI. Let\u0026rsquo;s check that Microcks has correctly published the schema for our sample topic. 
See below the results we have with our sample:\n$ curl https://schema-registry.apps.example.com/subjects -s -k | jq . [ \u0026#34;UsersignedupAvroAPI_0.1.2_user-signedup-microcks.avro.User\u0026#34; ] $ curl https://schema-registry.apps.example.com/subjects/UsersignedupAvroAPI_0.1.2_user-signedup-microcks.avro.User/versions -s -k | jq . [ 1 ] $ curl https://schema-registry.apps.example.com/subjects/UsersignedupAvroAPI_0.1.2_user-signedup-microcks.avro.User/versions/1 -s -k | jq . { \u0026#34;subject\u0026#34;: \u0026#34;UsersignedupAvroAPI_0.1.2_user-signedup-microcks.avro.User\u0026#34;, \u0026#34;version\u0026#34;: 1, \u0026#34;id\u0026#34;: 1, \u0026#34;schema\u0026#34;: \u0026#34;{\\\u0026#34;type\\\u0026#34;:\\\u0026#34;record\\\u0026#34;,\\\u0026#34;name\\\u0026#34;:\\\u0026#34;User\\\u0026#34;,\\\u0026#34;namespace\\\u0026#34;:\\\u0026#34;microcks.avro\\\u0026#34;,\\\u0026#34;fields\\\u0026#34;:[{\\\u0026#34;name\\\u0026#34;:\\\u0026#34;fullName\\\u0026#34;,\\\u0026#34;type\\\u0026#34;:\\\u0026#34;string\\\u0026#34;},{\\\u0026#34;name\\\u0026#34;:\\\u0026#34;email\\\u0026#34;,\\\u0026#34;type\\\u0026#34;:\\\u0026#34;string\\\u0026#34;},{\\\u0026#34;name\\\u0026#34;:\\\u0026#34;age\\\u0026#34;,\\\u0026#34;type\\\u0026#34;:\\\u0026#34;int\\\u0026#34;}]}\u0026#34; } Very nice! We can also use the kafkacat CLI tool to ensure that a topic consumer will be able to deserialize messages using the schema stored in the registry.\n$ kafkacat -b microcks-kafka-bootstrap-microcks.apps.example.com:9092 -t UsersignedupAvroAPI_0.1.2_user-signedup -s value=avro -r https://schema-registry.apps.example.com -o end % Auto-selecting Consumer mode (use -P or -C to override) % Reached end of topic UsersignedupAvroAPI_0.1.2_user-signedup [0] at offset 114 {\u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;fullName\u0026#34;: \u0026#34;John Doe\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 36} % Reached end of topic UsersignedupAvroAPI_0.1.2_user-signedup [0] at offset 116 🎉 Super!\nWithout Schema Registry Without a Schema Registry, things may be more complicated as you have to develop a consuming script or application that has the Avro Schema locally available to be able to deserialize the message content.\nFor our User signedup Avro API sample, we have such a consumer in our api-tooling GitHub repository.\nFollow these steps to retrieve it, install dependencies and check the Microcks mocks:\n$ git clone https://github.com/microcks/api-tooling.git $ cd api-tooling/async-clients/kafkajs-client $ npm install $ node avro-consumer.js microcks-kafka-bootstrap-microcks.apps.example.com:9092 UsersignedupAvroAPI_0.1.2_user-signedup Connecting to microcks-kafka-bootstrap-microcks.apps.example.com:9092 on topic UsersignedupAvroAPI_0.1.2_user-signedup {\u0026#34;level\u0026#34;:\u0026#34;INFO\u0026#34;,\u0026#34;timestamp\u0026#34;:\u0026#34;2021-02-11T20:30:48.672Z\u0026#34;,\u0026#34;logger\u0026#34;:\u0026#34;kafkajs\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;[Consumer] Starting\u0026#34;,\u0026#34;groupId\u0026#34;:\u0026#34;kafkajs-client\u0026#34;} {\u0026#34;level\u0026#34;:\u0026#34;INFO\u0026#34;,\u0026#34;timestamp\u0026#34;:\u0026#34;2021-02-11T20:30:48.708Z\u0026#34;,\u0026#34;logger\u0026#34;:\u0026#34;kafkajs\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;[Runner] Consumer has joined the 
group\u0026#34;,\u0026#34;groupId\u0026#34;:\u0026#34;kafkajs-client\u0026#34;,\u0026#34;memberId\u0026#34;:\u0026#34;my-app-7feb2099-1701-4a8a-9eff-50aeed60d65d\u0026#34;,\u0026#34;leaderId\u0026#34;:\u0026#34;my-app-7feb2099-1701-4a8a-9eff-50aeed60d65d\u0026#34;,\u0026#34;isLeader\u0026#34;:true,\u0026#34;memberAssignment\u0026#34;:{\u0026#34;UsersignedupAvroAPI_0.1.2_user-signedup\u0026#34;:[0]},\u0026#34;groupProtocol\u0026#34;:\u0026#34;RoundRobinAssigner\u0026#34;,\u0026#34;duration\u0026#34;:36} { \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41 } { \u0026#34;fullName\u0026#34;: \u0026#34;John Doe\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 36 } Note: this simple avro-consumer.js script is also able to handle TLS connections to your Kafka broker. It was omitted here for the sake of simplicity but you can put the name of the CRT file as the 3rd argument of the command.\n4. Run AsyncAPI tests Now the last step to get fully accustomed to Avro on Kafka support in Microcks is to perform some tests. As we need an API implementation for that and it\u0026rsquo;s not as easy as writing an HTTP-based API implementation, we have some helpful scripts in our api-tooling GitHub repository. These scripts are made for working with the User signedup Avro API sample we used so far but feel free to adapt them for your own use.\nSo the first thing for this section will be to retrieve the scripts and install dependencies if you have not already done that in the previous section. Follow the instructions below:\n$ git clone https://github.com/microcks/api-tooling.git $ cd api-tooling/async-clients/kafkajs-client $ npm install With Schema Registry When using a Schema Registry with the REGISTRY encoding configured in Microcks, the following schema illustrates Microcks\u0026rsquo; interactions with the broker and registry. Here, we are not necessarily using the broker and registry Microcks is using for mocking but we are able to reuse any Kafka broker and any Schema Registry available within your organization - typically this will depend on the environment you want to launch tests upon.\nThat said, imagine that you want to validate messages from a QA environment with a dedicated broker and registry. Start by using our utility script to produce some messages on an arbitrary user-registration topic. 
This script is using a local Avro schema to do the binary encoding and it is also publishing this schema into the connected QA Schema Registry:\n$ node avro-with-registry-producer.js kafka-broker-qa.apps.example.com:9092 user-registration https://schema-registry-qa.apps.example.com Connecting to kafka-broker-qa.apps.example.com:9092 on topic user-registration, using registry https://schema-registry-qa.apps.example.com {\u0026#34;level\u0026#34;:\u0026#34;ERROR\u0026#34;,\u0026#34;timestamp\u0026#34;:\u0026#34;2021-02-11T21:07:09.962Z\u0026#34;,\u0026#34;logger\u0026#34;:\u0026#34;kafkajs\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;[Connection] Response Metadata(key: 3, version: 5)\u0026#34;,\u0026#34;broker\u0026#34;:\u0026#34;kafka-broker-qa.apps.example.com:9092\u0026#34;,\u0026#34;clientId\u0026#34;:\u0026#34;my-app\u0026#34;,\u0026#34;error\u0026#34;:\u0026#34;There is no leader for this topic-partition as we are in the middle of a leadership election\u0026#34;,\u0026#34;correlationId\u0026#34;:1,\u0026#34;size\u0026#34;:108} [ { topicName: \u0026#39;user-registration\u0026#39;, partition: 0, errorCode: 0, baseOffset: \u0026#39;0\u0026#39;, logAppendTime: \u0026#39;-1\u0026#39;, logStartOffset: \u0026#39;0\u0026#39; } ] [...] Do not interrupt the execution of the script and go create a New Test within the Microcks web console. Use the following elements in the Test form:\nTest Endpoint: kafka://kafka-broker-qa.apps.example.com:9092/user-registration?registryUrl=https://schema-registry-qa.apps.example.com and note this new registryUrl parameter to tell Microcks where to get the Avro schema used for writing 😉, Runner: ASYNC API SCHEMA for validating against the AsyncAPI specification of the API. Whilst the Test Endpoint and Schema Registry may be secured with custom TLS certificates or username/password, we skipped this in this guide for the sake of simplicity, but Microcks handles it through Secrets or additional registryUsername and registryCredentialsSource parameters.\nLaunch the test, wait a few seconds and you should get access to the test results as illustrated below:\nThis is fine and we can see that the type is avro/binary and the message content is nicely displayed using JSON. But what about in case of a failure? What are we able to demonstrate using Microcks validation? Next to the script actually lie two Avro schemas:\nuser-signedup.avsc is correct and matches the one that is referenced in the AsyncAPI specification, user-signedup-bad.avsc represents an evolution and does not match the one from the AsyncAPI specification. Well, let\u0026rsquo;s see now what happens if we tweak the avro-with-registry-producer.js script a little bit\u0026hellip; Open it in your favorite editor to put comments on lines 48 and 56 and to remove comments on lines 45 and 55. Relaunch it and launch a new test\u0026hellip;\n🎉 We can see that there\u0026rsquo;s now a failure and that\u0026rsquo;s perfect! What does that mean? It means that when your application is using a different and incompatible schema from the one in the AsyncAPI contract, Microcks raises an error and spots the breaking change! The fullName required property was expected as stated in the AsyncAPI file but cannot be found in the incoming message\u0026hellip; thus your tested application producing messages is indeed sending garbage 😉\nWithout Schema Registry Let\u0026rsquo;s now look at the RAW encoding option and what we can deduce from tests. 
To simulate an existing application, we will now use the avro-producer.js script that is also using the local user-signedup.avsc Avro schema to do the binary encoding:\n$ node avro-producer.js kafka-broker-qa.apps.example.com:9092 user-registration Connecting to kafka-broker-qa.apps.example.com:9092 on topic user-registration {\u0026#34;level\u0026#34;:\u0026#34;ERROR\u0026#34;,\u0026#34;timestamp\u0026#34;:\u0026#34;2021-02-11T21:37:28.266Z\u0026#34;,\u0026#34;logger\u0026#34;:\u0026#34;kafkajs\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;[Connection] Response Metadata(key: 3, version: 5)\u0026#34;,\u0026#34;broker\u0026#34;:\u0026#34;kafka-broker-qa.apps.example.com:9092\u0026#34;,\u0026#34;clientId\u0026#34;:\u0026#34;my-app\u0026#34;,\u0026#34;error\u0026#34;:\u0026#34;There is no leader for this topic-partition as we are in the middle of a leadership election\u0026#34;,\u0026#34;correlationId\u0026#34;:1,\u0026#34;size\u0026#34;:96} [ { topicName: \u0026#39;user-registration\u0026#39;, partition: 0, errorCode: 0, baseOffset: \u0026#39;0\u0026#39;, logAppendTime: \u0026#39;-1\u0026#39;, logStartOffset: \u0026#39;0\u0026#39; } ] [...] Do not interrupt the execution of the script and go create a New Test within the Microcks web console. Use the following elements in the Test form:\nTest Endpoint: kafka://kafka-broker-qa.apps.example.com:9092/user-registration simply, Runner: ASYNC API SCHEMA for validating against the AsyncAPI specification of the API. Launch the test, wait a few seconds and you should get access to the test results as illustrated below:\nYou can see here that we just have the string representation of the binary message that was sent. Using RAW encoding, we cannot be sure that what we read makes any sense regarding the semantics of the API contract.\nIf you want to play with this idea, start making changes to the Avro schema used by the producer and add more properties of different types. As the schema referenced in the AsyncAPI contract is very basic, we\u0026rsquo;ll always be able to read.\nBut start removing properties or just sending single bytes, and you\u0026rsquo;ll see validation failures happen. In RAW mode, validation is very shallow: we cannot detect schema incompatibilities as we do not have the schema used for writing. So Microcks can just check the binary Avro it can read with the given schema, and as long as you send more bytes than expected: it works 😞\nWrap-Up In this guide we have seen how Microcks can also be used to simulate Avro messages on top of Kafka. We have also checked how it can connect to a Schema Registry such as the one from Confluent in order to speed up and make reliable the process of propagating Avro schema updates to API event consumers. We finally ended up demonstrating how Microcks can be used to detect any drifting issues between the expected Avro schema and the one effectively used by real-life producers.\nTake care: Microcks will detect if they send garbage! 🗑\n"},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/deployment-options/","title":"Architecture & deployment options","description":"","searchKeyword":"","content":"Introduction Microcks is a modular cloud-native application that can be deployed using many different installation methods. This documentation gives you details on internal components and exposes the different options for deploying them. 
It also discusses the pros and cons of those different options and the target usage they\u0026rsquo;re addressing.\nComplete Logical Architecture In its most comprehensive form, the Microcks architecture is made of components which are:\nThe Microcks main web application (also called webapp) that holds the UI resources as well as API endpoints, Its associated MongoDB database for holding your data such as the repository of APIs | Services and Tests, The Microcks Postman runtime (microcks-postman-runtime) that allows the execution of Postman Collection tests and calls back Microcks for storing results, An Apache Kafka broker that holds our private topic for changes and the public topics that will be used to publish mock messages by the microcks-async-minion. The Microcks Async Minion (microcks-async-minion) is a component responsible for publishing mock messages corresponding to AsyncAPI definitions as well as testing asynchronous endpoints. It retrieves these definitions from the Microcks webapp at startup and then listens to a Kafka topic for changes on these definitions, A Keycloak instance that holds the authentication mechanisms and identity provider integration. The schema below represents this full-featured architecture with relations between components and connections to outer brokers. We represented Kafka ones (X broker) as well as brokers (Y and Z) from other protocols. You\u0026rsquo;ll see that users access the main webapp either from their browser to see the console or from the CLI or any other application using the API endpoints.\n💡 For the sake of simplicity, we do not represent here: the PostgreSQL (or other database) that may be associated with Keycloak, nor the Zookeeper ensemble that may be associated with Kafka.\nAs the Microcks architecture is highly modular, you don\u0026rsquo;t have to deploy all these components depending on the set of features you want to use and depending on your deployment target.\nRegular vs Uber distribution While the Regular Microcks distribution is made for high-load, persistent and production-ready deployments, we provide a stripped-down version named the Uber distribution.\nThe Uber distribution is designed to support Inner Loop integration or Shift-Left scenarios, making it easy to embed Microcks in your development workflow, on a laptop, or within your unit tests. This distribution provides the essential services in a single container named microcks-uber and an optional one named microcks-uber-async-minion as represented below:\nWhilst the Regular distribution relies on an external MongoDB database for persistence, the Uber distribution uses an in-memory MongoDB that suits ephemeral usages well. Whilst the Regular distribution relies on Kafka for scalable sync-to-async communications, the Uber distribution uses WebSocket for simple one-to-one async communications.\nThe Uber distribution makes it easy to launch Microcks using a simple docker command like below, binding the only necessary port to your local 8585:\ndocker run -p 8585:8080 -it quay.io/microcks/microcks-uber:latest-native You can add asynchronous services connected to 8585 if needed:\ndocker run -p 8586:8081 -e MICROCKS_HOST_PORT=host.docker.internal:8585 -it quay.io/microcks/microcks-uber-async-minion:latest Deploying on your laptop As explained just above, the easiest way to deploy and use most of Microcks\u0026rsquo; features is to simply run the Uber distribution containers. However, depending on how you plan to use it in your workflow, it may be more convenient to go with other deployment methods.\n1. 
Using Testcontainers Testcontainers is an open source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container. It allows you to define your test dependencies as code, then simply run your tests and containers will be created and then deleted.\nMicrocks provides an official module that you can find on the Testcontainers Microcks page. Starting Microcks within your unit tests can then be as simple as these 2 Java code lines:\nvar microcks = new MicrocksContainer(DockerImageName.parse(\u0026#34;quay.io/microcks/microcks-uber:latest\u0026#34;)); microcks.start(); Please check our Developing with Testcontainers guide to get access to comprehensive documentation regarding supported languages, configuration and demo applications.\n2. Using Docker Desktop Extension This way of installing Microcks is very convenient for people wanting to start quickly with the most common Microcks capabilities without hitting the terminal 👻\nThe settings panel allows you to configure some options like whether you\u0026rsquo;d like to enable the Asynchronous APIs features, so that you can reconfigure the extension from a very simple architecture (on the left below) to something more complete (on the right below).\nWhilst this way of deploying Microcks is very convenient, the number of available configuration options is restricted and you may want to look at the next options for the best flexibility.\n3. Using Docker or Podman Compose Microcks can also be deployed using Docker or Podman Compose as explained in our Docker Compose installation guide.\nWe provide a bunch of default profiles to use different capabilities of Microcks depending on your working situation. Advanced profiles are using local configuration files mounted from the /config directory. You can refer to the Application Configuration Reference to get the full list of configuration options so that you can virtually enable any Microcks feature.\n💡 Using Docker Compose is the option that gives you the most flexibility when deploying and using Microcks on your laptop.\nDeploying on Kubernetes 1. Everything managed When starting from scratch, the simplest way of deploying Microcks is to use our Helm Chart or Operator that will handle the setup of all required dependencies for you. All the components from the architecture are set up through community container images or operators like the excellent Strimzi Operator.\nThis setup makes things easy to start and easy to drop: everything is placed under a single Kubernetes namespace as illustrated in the schema below:\n🚨 Whilst this approach is super convenient for discovery purposes, we don\u0026rsquo;t recommend it if you want to deploy a rock-solid production environment. Keycloak and MongoDB components are single instances that are not tuned for being scalable nor for following security best practices. We advise relying on their own Charts or Operators to deploy those components.\n2. Partially managed Besides this all-in-one approach, you may also use both installation methods to pick the components you want Microcks to install and the other existing ones you may want Microcks to connect to. You will have the following options:\nDo not deploy a MongoDB database and reuse an existing one. For that, put the mongodb.install flag to false and specify a mongodb.url, a mongodb.database as well as credentials and that\u0026rsquo;s it! Do not deploy a Keycloak instance and reuse an existing one. 
For that, put the keycloak.install flag to false and specify a keycloak.url and a keycloak.realm and that\u0026rsquo;s it! Optionally, you may want to specify a keycloak.privateUrl so that security token trusting will be done without hopping through a publicly reachable URL. Do not deploy a Kafka instance and reuse an existing one. For that, put the kafka.install flag to false and specify a kafka.url and that\u0026rsquo;s it! 💡 Reusing already deployed components may allow you to lower operational costs if you\u0026rsquo;re using shared instances. It can also allow you to use managed services that may be provided by your favorite cloud vendor.\nPlease check additional reference content for configuration details:\nSecurity Configuration reference \u0026gt; Reusing Keycloak section Security Configuration reference \u0026gt; Kafka section "},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/getting-started-tests/","title":"Getting started with Tests","description":"","searchKeyword":"","content":"Quickstart (continued) with Tests Now that you have finished the Getting started guide, you should have a Microcks installation up-and-running and filled with some samples from the Microcks repository. The goal of this page is to show you how you can use Microcks to achieve Contract Testing for your API, either manually from the UI or in an automated way using the Microcks CLI tooling.\nIf you have not done it in the previous step, you will need to load one of the Microcks samples: the Pastry API - 2.0. For that, refer to the previous Getting started.\nYou\u0026rsquo;ll see that this sample contains a number of different features. It will illustrate:\nSimple GET operation mocking and testing, Path parameters matching and testing, Content negotiation matching and testing. Now that we have the sample API registered in Microcks, we can deploy an implementation of this API contract. This will be our System Under Test.\nDeploying the API implementation We provide a basic implementation of the API Pastry - 2.0 API in version 2.0.0 and you may find its source code in this GitHub repository. The component is available as the following container image: quay.io/microcks/quarkus-api-pastry:latest.\nBefore launching some contract-tests on this implementation, you\u0026rsquo;ll need to run it locally, again via Docker or Podman.\nOpen a new terminal window and run this command to locally launch the implementation:\n$ docker run -i --rm -p 8282:8282 quay.io/microcks/quarkus-api-pastry:latest WARNING: The requested image\u0026#39;s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,\u0026lt; / /_/ /\\ \\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2024-05-15 15:48:46,996 INFO [io.quarkus] (main) quarkus-api-pastry 1.0.0-SNAPSHOT native (powered by Quarkus 1.7.1.Final) started in 0.421s. Listening on: http://0.0.0.0:8282 2024-05-15 15:48:47,025 INFO [io.quarkus] (main) Profile prod activated. 2024-05-15 15:48:47,026 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jaxb, resteasy-jsonb Now you have everything ready to launch your first test with Microcks!\nLaunching a test Now that our component implementing the API is running, it\u0026rsquo;s time to launch some tests to check if it is actually compliant with the API Contract. This is what we call Contract Testing. 
You can launch and run tests from the UI or from the microcks-cli tool.\n💡 As our API implementation is running in a container bound on port 8282, it will be accessible at http://localhost:8282 from our machine network. However, from the Microcks container perspective it will be accessible using the http://host.docker.internal:8282 alias that allows accessing the machine network from inside a container.\nFrom the UI You may already have seen it but there\u0026rsquo;s a NEW TEST\u0026hellip; button on the right-hand side of the page detailing the API Pastry service. Hitting it leads you to the following form where you will be able to specify a target URL for the test, as well as a Runner — a testing strategy for your new launch:\nJust copy/paste the endpoint URL where your quarkus-api-pastry implementation can be reached here. Then select the OPEN_API_SCHEMA test strategy, and finally, hit the Launch test ! button. This leads you to the following screen where you will wait for tests to run and finalize (green check marks should appear after some seconds).\nFollowing the Full results link in the above screen will lead you to a screen where you\u0026rsquo;ll have access to all the test details and request/response content exchanged with the endpoint during the tests. Very handy for troubleshooting or comparing results on different environments!\nFrom the CLI Microcks also provides the microcks-cli tool that can be used to automate the testing. Binary releases for Linux, MacOS or Windows platforms are available on the GitHub releases page.\nYou can download the binary or just use the corresponding container image for a quick ride! Specify the test command followed by the API/Service name and version, the test endpoint URL, the runner as well as some connection credentials and it will launch the test for you:\n$ docker run -it quay.io/microcks/microcks-cli:latest microcks-cli test \\ \u0026#39;API Pastry - 2.0:2.0.0\u0026#39; http://host.docker.internal:8282 OPEN_API_SCHEMA \\ --microcksURL=http://host.docker.internal:8585/api/ \\ --keycloakClientId=foo --keycloakClientSecret=bar \\ --insecure --waitFor=6sec MicrocksClient got status for test \u0026#34;6644db75269ded17868d654c\u0026#34; - success: true, inProgress: true MicrocksTester waiting for 2 seconds before checking again or exiting. Full TestResult details are available here: http://host.docker.internal:8585/#/tests/6644db75269ded17868d654c The above URL will give you access to the detailed report, as we had via the UI.
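If you prefer the downloaded binary over the container image, an equivalent invocation could look like the sketch below; since the CLI then runs directly on your machine, we can simply use localhost instead of the host.docker.internal alias:\n$ microcks-cli test \u0026#39;API Pastry - 2.0:2.0.0\u0026#39; http://localhost:8282 OPEN_API_SCHEMA \\ --microcksURL=http://localhost:8585/api/ \\ --keycloakClientId=foo --keycloakClientSecret=bar \\ --insecure --waitFor=6sec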
"},{"section":"Documentation","url":"https://microcks.io/documentation/references/templates/","title":"Mock Templates","description":"","searchKeyword":"","content":"Introduction This page contains the comprehensive lists of variables and functions that can be used to genete dynamic mock content in Microcks.\nThere is 3 different kinds of expressions that can be used to generate dynamic content in mocks when included into the {{ }} expression marker:\nVariables Reference Expressions allow reusing elements from an incoming request, accessed from variables, Context Expressions allow reusing elements from the request processing context, Function Expressions allow generating dynamic data using helper functions. Variable Reference Expressions Simple, array and map The request object is a simple bean of class EvaluableRequest that contains 4 properties of different types. Properties can be simply evaluated using the . notation to navigate to their value:\nbody is a string property representing request payload, path is a string array property representing the sequence of path elements in URI, params is a map of string:string representing the request query parameters, headers is a map of string:string representing the request headers. Now let\u0026rsquo;s imagine the following request coming onto a Microcks endpoint for Hello API version 1.0:\n$ curl http://microcks.example.com/rest/Hello+API/1.0/hello/microcks?locale=US -H \u0026#39;trace: azertyuiop\u0026#39; -d \u0026#39;rocks\u0026#39; Here\u0026rsquo;s how the different expressions will be evaluated:\nExpression Evaluation Result request.body rocks request.path[1] microcks request.params[locale] US request.headers[trace] azertyuiop JSON body Pointer expression In the case where request payload body can be interpreted as JSON, Microcks has also the capability of defining template expressions that will analyse this structured content and pick some elements for rendering.\nImagine our API deal with library and may receive this kind of request body payload:\n{ \u0026#34;library\u0026#34;: \u0026#34;My Personal Library\u0026#34;, \u0026#34;books\u0026#34;: [ { \u0026#34;title\u0026#34;:\u0026#34;Title 1\u0026#34;, \u0026#34;author\u0026#34;:\u0026#34;Jane Doe\u0026#34; }, { \u0026#34;title\u0026#34;:\u0026#34;Title 2\u0026#34;, \u0026#34;author\u0026#34;:\u0026#34;John Doe\u0026#34; } ] } Using Microcks we can just append a JSON Pointer expression to request.body element in order to ask for a deeper parsing and evaluation. The JSON Pointer part should be expressed just after a starting / indicating we\u0026rsquo;re navigating into a sub-query.\nPointer can reference text value node as well as arrays and objects. The node\u0026rsquo;s contents will be rendered as JSON string if complex objects or arrays are referenced. 
Here\u0026rsquo;s a bunch of examples on the previous library case and how they\u0026rsquo;ll be rendered:\nExpression Evaluation Result Comment request.body/library My Personal Library request.body/books/1/author John Doe JSON Pointer array index starting at 0 request.body/books/0 {\u0026quot;title\u0026quot;:\u0026quot;Title 1\u0026quot;,\u0026quot;author\u0026quot;:\u0026quot;Jane Doe\u0026quot;} JSON Pointer to object returning JSON serialized string of contents request.body/books [{\u0026quot;title\u0026quot;:\u0026quot;Title 1\u0026quot;,\u0026quot;author\u0026quot;:\u0026quot;Jane Doe\u0026quot;},{ \u0026quot;title\u0026quot;:\u0026quot;Title 2\u0026quot;,\u0026quot;author\u0026quot;:\u0026quot;John Doe\u0026quot;}] JSON Pointer to array returning JSON serialized string of contents XML body XPath expression In the case where the request payload body can be interpreted as XML, Microcks has the capability of defining template expressions too!\nImagine our API deals with a library and may receive this kind of request body payload:\n\u0026lt;library\u0026gt; \u0026lt;name\u0026gt;My Personal Library\u0026lt;/name\u0026gt; \u0026lt;books\u0026gt; \u0026lt;book\u0026gt;\u0026lt;title\u0026gt;Title 1\u0026lt;/title\u0026gt;\u0026lt;author\u0026gt;Jane Doe\u0026lt;/author\u0026gt;\u0026lt;/book\u0026gt; \u0026lt;book\u0026gt;\u0026lt;title\u0026gt;Title 2\u0026lt;/title\u0026gt;\u0026lt;author\u0026gt;John Doe\u0026lt;/author\u0026gt;\u0026lt;/book\u0026gt; \u0026lt;/books\u0026gt; \u0026lt;/library\u0026gt; Analogous to the JSON payload, we can just append an XPath expression to the request.body element to ask for deeper parsing and evaluation. Here\u0026rsquo;s a bunch of examples on the previous library case and how they\u0026rsquo;ll be rendered:\nExpression Evaluation Result Comment request.body/library/name My Personal Library request.body//name My Personal Library Use the wildcard form. // means \u0026ldquo;any path\u0026rdquo; request.body/library/books/book[1]/author Jane Doe Take care of XPath array index starting at 1 ;-) In case you\u0026rsquo;re dealing with namespaced XML or SOAP requests, Microcks does not support namespaces for now but the relaxed local-name() XPath expression allows you to work around this limitation. If we get a namespaced version of our XML payload:\n\u0026lt;ns:library xmlns:ns=\u0026#34;https://microcks.io\u0026#34;\u0026gt; \u0026lt;ns:name\u0026gt;My Personal Library\u0026lt;/ns:name\u0026gt; \u0026lt;ns:books\u0026gt; \u0026lt;ns:book\u0026gt;\u0026lt;ns:title\u0026gt;Title 1\u0026lt;/ns:title\u0026gt;\u0026lt;ns:author\u0026gt;Jane Doe\u0026lt;/ns:author\u0026gt;\u0026lt;/ns:book\u0026gt; \u0026lt;ns:book\u0026gt;\u0026lt;ns:title\u0026gt;Title 2\u0026lt;/ns:title\u0026gt;\u0026lt;ns:author\u0026gt;John Doe\u0026lt;/ns:author\u0026gt;\u0026lt;/ns:book\u0026gt; \u0026lt;/ns:books\u0026gt; \u0026lt;/ns:library\u0026gt; We can adapt the XPath expression to ignore namespace prefixes:\nExpression Evaluation Result Comment request.body//*[local-name() = 'name'] My Personal Library Ignore namespaces and use local tag names Fallback When dealing with optional content from the incoming request, it can be useful to have some fallback in case of missing content. For that purpose, you can use the || notation to express a fallback expression. In the example below, either the incoming request prefix is used, or we generate a random one using a function in case it\u0026rsquo;s null or empty.\n{ \u0026#34;prefix\u0026#34;: \u0026#34;{{ request.body/prefix || randomNamePrefix() }}\u0026#34;, \u0026#34;fullname\u0026#34;: \u0026#34;{{ request.body/firstname request.body/lastname }}\u0026#34; }
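As an illustration, assuming an incoming request body of { \u0026#34;firstname\u0026#34;: \u0026#34;John\u0026#34;, \u0026#34;lastname\u0026#34;: \u0026#34;Doe\u0026#34; } with no prefix property, this template could render something like { \u0026#34;prefix\u0026#34;: \u0026#34;Ms.\u0026#34;, \u0026#34;fullname\u0026#34;: \u0026#34;John Doe\u0026#34; }, the prefix value being randomly generated.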
Context Expressions Aside from the request object that is automatically injected, you have access to a mock-request wide context. You can inject custom variables into this context using the SCRIPT dispatcher through the requestContext object (see this documentation) or by using the put(myVariable) function with a redirect expression as detailed below.\nVariables from the context can simply be used in templates using their name within the template mustache markers, like this: {{ myVariable }}\nFunction Expressions Function expressions allow generation of dynamic content. They are different from variable references as they include the () notation to provide arguments. Microcks also supports a notation compatible with Postman Dynamic variables, so you can reuse your existing responses expressed within a Postman Collection. The only limitation is that Postman dynamic variables cannot handle argument passing, so functions will always be invoked without arguments.\nSo basically, a function expression can be materialized with the Microcks notation function(arg1, arg2) OR the Postman notation $function.\nCommon functions Put in context The put() function allows storing a result into the mock-request wide context using a variable name. The result is acquired from a \u0026gt; redirect expression as the previous function invocation result. It has a mandatory argument that is the variable name used for storing into the context.\nuuid() \u0026gt; put(myId) // 3a721b7f-7dc9-4c45-9777-516942b98e0d WITH this id stored in the myId variable. // Can be reused later in a template using {{ myId }}. Date generator The now() function allows generating the current date. It can also be invoked using the timestamp() alias.\nInvoked with no argument, it\u0026rsquo;s a simple milliseconds timestamp since the EPOCH beginning. This function can also be invoked with one argument being the pattern to use for rendering the current date as a string. The Java date and time patterns are used as reference.\nIt can also be called with a second argument representing an amount of time to add to the current date before rendering the string representation. It does not support composite amounts for the moment. Think of it as a commodity for generating expiry or validity dates 😉 Here are some examples below:\nnow() // 1581425292309 now(dd/MM/yyyy HH:mm:ss) // 11/02/2020 13:48:12 now(dd/MM/yyyy, 1M) // 11/03/2020 $now // 1581425292309 $timestamp // 1581425292309 UUID generator The uuid() function allows simply generating a UUID compliant with RFC 4122 (see https://www.cryptosys.net/pki/uuid-rfc4122.html). It can also be invoked using the guid() or randomUUID() aliases.\nuuid() // 3F897E85-62CE-4B2C-A957-FCF0CCE649FD guid() // 3a721b7f-7dc9-4c45-9777-516942b98e0d $randomUUID // 6929bb52-3ab2-448a-9796-d6480ecad36b Random Integer generator The randomInt() function allows generating a random integer.\nWhen called with no argument, the value spans between -65635 and 65635. You can specify an argument to force the generation of a positive integer that is less than or equal to this argument.\nFinally, it can be invoked with a second argument, thus defining a range for the integer to be generated. 
Here are some examples below:\nrandomInt() // -5239 randomInt(32) // 27 randomInt(25, 50) // 43 Random String generator The randomString() function simply generates a random alphanumeric string. The default length when called with no argument is 32 characters. One can specify an integer argument to force the string to the desired length. Here are some examples below:\nrandomString() // kYM8nSjEdLfgKOGG1dfacro2IUmuuan randomString(64) // VclBAQiNAybe0B5IrXjGqOChQNDFdoTguf5jWn2tqRNfptWSYFy7yxdpxoNIGOpC Random Value generator The randomValue() function simply generates a random string among the provided values specified as arguments. Here are some examples below:\nrandomValue(foo, bar) // foo OR bar randomValue(apple, orange, grape, pear) // apple, orange, grape OR pear Random Boolean generator The randomBoolean() function simply generates a random boolean. Here are some examples below:\nrandomBoolean() // true $randomBoolean // false Names related functions The name-related functions use the Datafaker library to generate fake data from a library of common names and related data.\nFirst name generator The randomFirstName() function allows generating a random person first name.\nrandomFirstName() // Samantha $randomFirstName // Chandler Last name generator The randomLastName() function allows generating a random person last name.\nrandomLastName() // Schneider $randomLastName // Williams Full name generator The randomFullName() function allows generating a random person full name.\nrandomFullName() // Sylvan Fay $randomFullName // Jonathon Kunze Name prefix generator The randomNamePrefix() function allows generating a random person name prefix.\nrandomNamePrefix() // Ms. $randomNamePrefix // Dr. Name suffix generator The randomNameSuffix() function allows generating a random person name suffix.\nrandomNameSuffix() // MD $randomNameSuffix // DDS Phone, Address and Location related functions The address-related functions use the Datafaker library to generate fake data from a library of common addresses and related data.\nPhone number generator The randomPhoneNumber() function allows generating random 10-digit phone numbers.\nrandomPhoneNumber() // 494-261-3424 $randomPhoneNumber // 662-302-7817 City generator The randomCity() function allows generating a random city name.\nrandomCity() // Paris $randomCity // Boston Street Name generator The randomStreetName() function allows generating a random street name.\nrandomStreetName() // General Street $randomStreetName // Kendrick Springs Street Address generator The randomStreetAddress() function allows generating a random street address.\nrandomStreetAddress() // 5742 Harvey Streets $randomStreetAddress // 47906 Wilmer Orchard Country generator The randomCountry() function allows generating a random country name.\nrandomCountry() // Kazakhstan $randomCountry // Austria Country code generator The randomCountryCode() function allows generating a random 2-letter country code (ISO 3166-1 alpha-2).\nrandomCountryCode() // CV $randomCountryCode // MD Latitude generator The randomLatitude() function allows generating a random latitude coordinate.\nrandomLatitude() // 27.3644 $randomLatitude // 55.2099 Longitude generator The randomLongitude() function allows generating a random longitude coordinate.\nrandomLongitude() // 40.6609 $randomLongitude // 171.7139 Domains, Emails and Usernames related functions The domain- and email-related functions use the Datafaker library to generate fake data from a library of common domains, emails and 
related data.\nEmail generator The randomEmail() function allows generating a random email address.\nrandomEmail() // [email protected] $randomEmail // [email protected] "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/mocks-constraints/","title":"Applying constraints to mocks","description":"","searchKeyword":"","content":"Overview Sometimes it may be required to specify additional constraints onto a mock operation. Constraints that are related to API behaviour or semantics may be hard or even impossible to express with an API contract. Microcks allows you to specify such constraints by editing the properties of a Service or API operation.\nThis guide will introduce you to the concept of Microcks parameter constraints, which allows you to customize the behavior and the validation of your mocks. You\u0026rsquo;ll learn through a simple example how to place constraints onto a REST API operation.\n1. Concepts In Microcks, constraints can be put onto Query or Header parameters and are of 3 types:\nrequired constraints force the presence of a parameter in the incoming request, recopy constraints just send back the same parameter name and value into the mock response, match constraints check the value of a parameter against a specified regular expression.
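As an illustration of what we\u0026rsquo;ll exercise in the practice below, the constraints placed on the operation could be: a required Authorization header with a match rule like ^Bearer\\s[a-f0-9]{36}$, plus a required x-request-id header with a recopy rule so that the correlation id is sent back in the mock response.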
2. Practice To practice the setup of constraints, you can reuse the Pastry API sample that is described in our Getting Started tutorial. Now imagine you put such constraints onto the GET /pastry operation of your REST API, which is secured using a JWT Bearer token and should manage traceability using a correlation id:\nNow let\u0026rsquo;s do some tests to check Microcks behavior:\n$ http http://localhost:8080/rest/API+Pastry/1.0.0/pastry --- OUTPUT --- HTTP/1.1 400 Connection: close Content-Length: 65 Content-Type: text/plain;charset=UTF-8 Date: Fri, 13 Dec 2019 19:20:31 GMT X-Application-Context: application Parameter Authorization is required. Check parameter constraints. Hum\u0026hellip; Adding the Authorization header\u0026hellip;\n$ http http://localhost:8080/rest/API+Pastry/1.0.0/pastry Authorization:\u0026#39;Bearer 123\u0026#39; --- OUTPUT --- HTTP/1.1 400 Connection: close Content-Length: 89 Content-Type: text/plain;charset=UTF-8 Date: Fri, 13 Dec 2019 19:31:01 GMT X-Application-Context: application Parameter Authorization should match ^Bearer\\s[a-f0-9]{36}$. Check parameter constraints. Hum\u0026hellip; Fixing the Bearer format and adding the x-request-id header:\n$ http http://localhost:8080/rest/API+Pastry/1.0.0/pastry Authorization:\u0026#39;Bearer abcdefabcdefabcdefabcdefab1234567890\u0026#39; x-request-id:123 --- OUTPUT --- HTTP/1.1 200 Content-Length: 559 Content-Type: application/json Date: Fri, 13 Dec 2019 19:33:52 GMT X-Application-Context: application x-request-id: 123 [ { \u0026#34;description\u0026#34;: \u0026#34;Delicieux Baba au Rhum pas calorique du tout\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Baba Rhum\u0026#34;, \u0026#34;price\u0026#34;: 3.2, \u0026#34;size\u0026#34;: \u0026#34;L\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;available\u0026#34; }, { \u0026#34;description\u0026#34;: \u0026#34;Delicieux Divorces pas calorique du tout\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Divorces\u0026#34;, \u0026#34;price\u0026#34;: 2.8, \u0026#34;size\u0026#34;: \u0026#34;M\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;available\u0026#34; }, { \u0026#34;description\u0026#34;: \u0026#34;Delicieuse Tartelette aux Fraises fraiches\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Tartelette Fraise\u0026#34;, \u0026#34;price\u0026#34;: 2, \u0026#34;size\u0026#34;: \u0026#34;S\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;available\u0026#34; } ] Yeah! That\u0026rsquo;s it 🎉 You successfully configured parameter constraints on the GET /pastry operation!\nWrap-up Constraints are an easy-to-use and powerful way of specifying additional behavior or validation rules for your mocks. Defining constraints places your consumers in a better position for a seamless transition to the real-life implementation of your API once it is ready.\nIt\u0026rsquo;s worth noting that Operation parameter constraints are saved into the Microcks database and not replaced by a new import of your Service or API definition. They can be independently set and updated using the Microcks REST API.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/swagger-conventions/","title":"Swagger Conventions","description":"","searchKeyword":"","content":"Microcks supports Swagger mocking and testing thanks to its multi-artifacts support feature. In order to use Swagger in Microcks, you will need 2 artifacts for each API definition:\nA Swagger definition that holds the API metadata and operations definitions, A Postman Collection file that holds the mock examples (requests and responses) for the different operations of the API. Conventions In order to be correctly imported and understood by Microcks, your Postman file should follow a small set of reasonable conventions and best practices.\nYour Postman collection will need to have a name that matches the API name and a custom property version that matches the API version. As of writing, Postman does not allow editing of such custom properties although the Collection v2 format allows them. By convention, we allow setting it through the collection description using this syntax: version=1.0 - Here is now the full description of my collection.... Your Postman collection will need to organize examples into requests having the same url as the Swagger paths and verbs. The comparison is realized apart from the path templating characters. E.g. if in Swagger you have a GET /path/{param} operation, a Postman request with the GET verb and the /path/:param url will be considered as equivalent. Your Postman collection will then simply hold examples, defining the values for all the different fields of a request/response pair. 
We recommend having a look at our sample Swagger API for the Beer Catalog API as well as the companion Postman collection to fully understand and see those conventions in action.\nIllustration Let\u0026rsquo;s dive into the details of our sample Beer Catalog API.\nSpecifying API structure This is a fairly basic Swagger API. You can see below an excerpt of the Swagger definition found in the BeerCatalogAPI-swagger.json file, defining 3 operations:\n{ \u0026#34;swagger\u0026#34;: \u0026#34;2.0\u0026#34;, \u0026#34;info\u0026#34;: { \u0026#34;title\u0026#34;: \u0026#34;Beer Catalog API\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;0.99\u0026#34; }, \u0026#34;paths\u0026#34;: { \u0026#34;/beer\u0026#34;: { \u0026#34;get\u0026#34;: { } }, \u0026#34;/beer/{name}\u0026#34;: { \u0026#34;get\u0026#34;: { } }, \u0026#34;/beer/findByStatus/{status}\u0026#34;: { \u0026#34;get\u0026#34;: { } } } } Considering the info section of this file, when imported into Microcks, it will discover the Beer Catalog API with version 0.99 and 3 operations that are: GET /beer, GET /beer/{name} and GET /beer/findByStatus/{status}.\nSpecifying API examples Using Postman, just create a new Collection - using the same name as the Swagger API and adding the custom version property at the beginning of the description as illustrated below:\nYou can now start organizing and creating requests whose verb and url match the Swagger API operation paths and verbs. For our example, we\u0026rsquo;re specifying the three operations: GET /beer, GET /beer/{name} and GET /beer/findByStatus/{status}.\n💡 Note in the example above that Microcks doesn\u0026rsquo;t care about the Postman request name but checks the verb and the url. Here we define the request and attached examples for the Swagger GET /beer/{name} operation. The checked parts are the verb (GET here) and the url (/beer/:name is equivalent to the /beer/{name} path).\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/","title":"How-To Guides","description":"Here below all the documentation pages related to **Guides**.","searchKeyword":"","content":"Teaches Microcks\u0026rsquo; capabilities at a high level Welcome to Microcks Guides! Our Guides section teaches Microcks capabilities and features at a high level. It should help you get something done, correctly and safely.\n💡 Remember Contribute to Microcks Guides\nCode isn\u0026rsquo;t the only way to contribute to OSS; Dev Docs are a huge help that benefit the entire OSS ecosystem. At Microcks, we value Doc contributions as much as every other type of contribution. ❤️\nTo get started as a Docs contributor:\nFamiliarize yourself with our project\u0026rsquo;s Contribution Guide and our Code of Conduct Head over to our Microcks Docs Board Pick an issue you would like to contribute to and leave a comment introducing yourself. This is also the perfect place to leave any questions you may have on how to get started. If there is no work done in that Docs issue yet, feel free to open a PR and get started! Docs contributor questions\nDo you have a documentation contributor question and you\u0026rsquo;re wondering how to tag us in a GitHub discussion or PR? 
Have no fear!\nJoin us on Discord and use the #documentation channel to ping us!\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/integration/backstage-plugin/","title":"Configuring the Backstage Plugin","description":"","searchKeyword":"","content":" 🪄 To Be Created\nThis is a new documentation page that has to be written as part of our Refactoring Effort.\nGoal of this page\n\u0026hellip; "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/podman-compose/","title":"With Podman Compose","description":"","searchKeyword":"","content":"This guide shows you how to install and run Microcks using Podman Compose.\nPodman Compose is a tool for easily testing and running multi-container applications. Microcks offers a simple way to set up the minimal required containers to have a functional environment on your local computer. This procedure has been successfully tested with Podman 2.1.1 on Fedora 33+ and should be OK on CentOS Stream 8+ and RHEL 8+ distributions too.\nTo get started, make sure you first have the Podman and the Podman Compose packages installed on your system.\nThen, in your terminal issue the following commands:\nClone this repository. git clone https://github.com/microcks/microcks.git --depth 10 Change to the install folder cd microcks/install/podman-compose Spin up the containers in rootless mode using our utility script: $ ./run-microcks.sh On macos, need to get the userid and groupid from podman machine. Assuming this machine is named \u0026#39;podman-machine-default\u0026#39;. Change name in script otherwise. Starting Microcks using podman-compose ... ------------------------------------------ Stop it with: podman-compose -f microcks.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; stop Re-launch it with: podman-compose -f microcks.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; start Clean everything with: podman-compose -f microcks.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; down ------------------------------------------ Go to https://localhost:8080 - first login with admin/microcks123 Having issues? Check you have changed microcks.yml to your platform podman-compose -f microcks.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; up -d This will start the required containers and set up a simple environment for your usage.\nOpen a new browser tab and point to the http://localhost:8080 endpoint. This will redirect you to the Keycloak Single Sign On page for login. Use the following default credentials to login into the application:\nUsername: admin Password: microcks123 You will be redirected to the main dashboard page.\nEnabling Asynchronous API features Support for the Asynchronous API features of Microcks is not enabled by default. If you feel your local machine has enough resources to afford it, you can enable them using a slightly different command line.\nIn your terminal use the following command instead:\n$ ./run-microcks.sh async On macos, need to get the userid and groupid from podman machine. Assuming this machine is named \u0026#39;podman-machine-default\u0026#39;. Change name in script otherwise. Starting Microcks using podman-compose ... 
------------------------------------------ Stop it with: podman-compose -f microcks.yml -f microcks-template-async-addon.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; stop Re-launch it with: podman-compose -f microcks.yml -f microcks-template-async-addon.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; start Clean everything with: podman-compose -f microcks.yml -f microcks-template-async-addon.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; down ------------------------------------------ Go to https://localhost:8080 - first login with admin/microcks123 Having issues? Check you have changed microcks.yml to your platform podman-compose -f microcks.yml -f microcks-template-async-addon.yml --podman-run-args=\u0026#39;--userns=keep-id:uid=501,gid=1000\u0026#39; up -d Podman Compose is now launching additional containers, namely zookeeper, kafka and the microcks-async-minion.\nYou may want to check our blog post for a detailed walkthrough on starting Async features on docker-compose (Podman Compose is very similar).\nDevelopment mode A development-oriented mode, without the Keycloak service, is also available thanks to:\n$ ./run-microcks.sh dev This configuration enables Asynchronous API features in a very lightweight mode using the Red Panda broker instead of a full-blown Apache Kafka distribution.\nWrap-up You just installed Microcks on your local machine using Podman Compose and terminal commands. Congrats! 🎉\nYou have discovered that Microcks provides a bunch of default profiles to use different capabilities of Microcks depending on your working situation. Advanced profiles are using local configuration files mounted from the /config directory. You can refer to the Application Configuration Reference to get the full list of configuration options.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/first-rest-mock/","title":"Your 1st REST mock","description":"","searchKeyword":"","content":"Overview This tutorial is a step-by-step walkthrough on how to use the OpenAPI v3 Specification to specify mocks for your API. It is a hands-on introduction to the OpenAPI Conventions reference that brings all the details on the conventions being used.\nWe will go through a practical example based on the famous PetStore API. We\u0026rsquo;ll build the reference petstore-1.0.0-openapi.yaml file by iterations, highlighting the details to get you started with mocking OpenAPI on Microcks.\nLet\u0026rsquo;s start! 💥\n1. Setup Microcks and OpenAPI skeleton The first mandatory step is obviously to set up Microcks 😉. For OpenAPI usage, we don\u0026rsquo;t need any particular setup and the simple docker way of deploying Microcks as exposed in Getting started is perfectly suited. Following the getting started, you should have a running Microcks instance on http://localhost:8585.\nThis could be on another port if 8585 is already used on your machine.
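If you haven\u0026rsquo;t done it yet, a single container is enough for this tutorial. For instance, this command (also used in our stateful mocks guide) starts a throwaway instance bound on port 8585:\n$ docker run -p 8585:8080 -it --rm quay.io/microcks/microcks-uber:latest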
Now let\u0026rsquo;s start with the skeleton of our OpenAPI contract for the PetStore API. We\u0026rsquo;ll start with general information on this API and with the definition of two different datatypes:\nNewPet is the data structure that will be used to register a new pet in our store - it just mandates a name attribute, Pet is an extension of this structure for already registered pets. Once registered, a pet has an additional id attribute. This is over-simplistic but enough to help demonstrate how to do things. Here\u0026rsquo;s the YAML representing this part of the OpenAPI contract:\nopenapi: 3.0.2 info: title: Petstore API version: 1.0.0 description: |- A sample API that uses a petstore as an example to demonstrate features in the OpenAPI 3.0 specification and Microcks contact: name: Microcks Team url: \u0026#39;https://microcks.io\u0026#39; license: name: Apache 2.0 url: \u0026#39;https://www.apache.org/licenses/LICENSE-2.0.html\u0026#39; components: schemas: Pet: allOf: - $ref: \u0026#39;#/components/schemas/NewPet\u0026#39; - properties: id: format: int64 type: integer required: - id NewPet: properties: name: type: string required: - name 2. Basic operation in OpenAPI Let\u0026rsquo;s now define a first operation for this API. We want to give users the ability to consult their list of favorite pets in the store. Hence, we\u0026rsquo;ll define a /my/pets path in our API with a GET operation. This operation will just return an array of Pet objects.\nWe\u0026rsquo;re going to add OpenAPI Example Objects. As this operation does not expect anything as input but just produces a result, we\u0026rsquo;ll add an example called my_pets in the response content. Just paste the content below at the end of the above skeleton:\npaths: /my/pets: get: responses: \u0026#34;200\u0026#34;: content: application/json: schema: type: array items: $ref: \u0026#39;#/components/schemas/Pet\u0026#39; examples: my_pets: value: - id: 1 name: Zaza - id: 2 name: Tigress - id: 3 name: Maki - id: 4 name: Toufik Because of the application/json content type, we can express examples as JSON or as YAML objects. Examples are really helpful when carefully chosen to represent real-life samples very close to the actual functional situation. Here I\u0026rsquo;ve put my real cats\u0026rsquo; 🐈 names.\nAs soon as your contract contains examples, you can import it into Microcks and it will use the examples to produce a real-life simulation of your API. Use the Direct Upload method to inject your OpenAPI file in Microcks. You should get the following result:\n🤔 You may have noticed in the above screenshot that the dispatching properties are empty for now. This is normal as we\u0026rsquo;re on a basic operation with no routing logic. We\u0026rsquo;ll talk about dispatchers in the next section.\nMicrocks has found my_pets as a valid sample to build a simulation upon. A mock URL has been made available and you can use it to test the API operation as demonstrated below with a curl command:\n$ curl http://localhost:8585/rest/Petstore+API/1.0.0/my/pets -s | jq [ { \u0026#34;id\u0026#34;: 1, \u0026#34;name\u0026#34;: \u0026#34;Zaza\u0026#34; }, { \u0026#34;id\u0026#34;: 2, \u0026#34;name\u0026#34;: \u0026#34;Tigress\u0026#34; }, { \u0026#34;id\u0026#34;: 3, \u0026#34;name\u0026#34;: \u0026#34;Maki\u0026#34; }, { \u0026#34;id\u0026#34;: 4, \u0026#34;name\u0026#34;: \u0026#34;Toufik\u0026#34; } ] This is your first OpenAPI mock 🎉 Nice achievement!\n3. Using query parameters in OpenAPI Let\u0026rsquo;s make things a bit spicier by adding query parameters. Now assume we want to provide a simple search operation to retrieve all pets in the store using a simple filter. We\u0026rsquo;ll end up adding a new GET operation to your API, bound to the /pets path. Of course, we\u0026rsquo;ll have to define the filter parameter that will be present in the query so that users can query /pets?filter=zoe to get all the pets having zoe in their name.\nSo we\u0026rsquo;ll add a new path snippet in the paths section of our OpenAPI document like below. 
This snippet is also integrating Example Objects for both the query parameter and the response.\npaths: [...] /pets: get: parameters: - name: filter in: query schema: type: string examples: k_pets: value: k responses: \u0026#34;200\u0026#34;: content: application/json: schema: type: array items: $ref: \u0026#39;#/components/schemas/Pet\u0026#39; examples: k_pets: value: - id: 3 name: Maki - id: 4 name: Toufik The important thing to notice here is the logic behind example naming. In fact, the OpenAPI specification allows specifying example fragments for each and every piece of a contract. To be tied together by Microcks, related parts must have the same key. Here the key is k_pets, which links filter=k with the associated response containing the 2 cats having a k in their name. When imported into Microcks, you should have the following result:\nWhat about the dispatching properties we mentioned earlier? You can see that they now have values. Because of the presence of a parameter in your operation, Microcks has inferred a routing logic named URI_PARAMS that is based on a matching rule on the filter parameter. Let\u0026rsquo;s try the mock URL with this command:\n$ curl http://localhost:8585/rest/Petstore+API/1.0.0/pets\\?filter\\=k -s | jq [ { \u0026#34;id\u0026#34;: 3, \u0026#34;name\u0026#34;: \u0026#34;Maki\u0026#34; }, { \u0026#34;id\u0026#34;: 4, \u0026#34;name\u0026#34;: \u0026#34;Toufik\u0026#34; } ] 🛠️ As an exercise to validate your understanding, just add a new i_pets sample so that when a user specifies a filter with value i, the 3 correct cats are returned (Tigresse, Maki and Toufik).
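One possible solution for this exercise, keeping the same i_pets key on both sides, could be to add the fragment examples: i_pets: value: i under the filter parameter, and the fragment examples: i_pets: value: - id: 2 name: Tigresse - id: 3 name: Maki - id: 4 name: Toufik under the response content.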
In this section, we introduced the naming convention that ties together the fragments defining matching request and response elements. This is the foundational mechanism for defining comprehensive examples illustrating the functional expectations of your API. Depending on the tied elements, Microcks deduces a dispatching or routing logic based on incoming request elements. The dispatcher is a powerful concept in Microcks that can be fully customized if the inferred ones are not enough for your needs.\n4. Using path parameters in OpenAPI Another very common construction in OpenAPI is the usage of path parameters. Such parameters are directly integrated into the API request URL path so that you have the ability to access identified resources. Typically with the Petstore API, you\u0026rsquo;d want to allow users to use /pets/1 directly to access the cat with identifier 1.\nLet\u0026rsquo;s add such a new operation into the API by adding the following path snippet into the paths section. Once again, we\u0026rsquo;re integrating Example Objects for both the path parameter and the response.\npaths: [...] /pets/{id}: get: parameters: - name: id in: path schema: type: string examples: pet_1: value: \u0026#39;1\u0026#39; pet_2: value: \u0026#39;2\u0026#39; responses: \u0026#34;200\u0026#34;: content: application/json: schema: $ref: \u0026#39;#/components/schemas/Pet\u0026#39; examples: pet_1: value: id: 1 name: Zaza pet_2: value: id: 2 name: Tigresse You can notice in this snippet that we directly integrate two different samples using the same keys (pet_1 and pet_2) to tie together the example fragments in a coherent way. When imported into Microcks, you should have the following result:\nThe Dispatcher inferred by Microcks has been adapted to URI_PARTS, which means that the routing logic is made of parts (or path elements) of the URI. The element that is considered for routing is the id parameter. Let\u0026rsquo;s test these new mocks with some commands:\n$ curl http://localhost:8585/rest/Petstore+API/1.0.0/pets/1 -s | jq { \u0026#34;id\u0026#34;: 1, \u0026#34;name\u0026#34;: \u0026#34;Zaza\u0026#34; } $ curl http://localhost:8585/rest/Petstore+API/1.0.0/pets/2 -s | jq { \u0026#34;id\u0026#34;: 2, \u0026#34;name\u0026#34;: \u0026#34;Tigresse\u0026#34; } 🎉 Fantastic! We now have a mock with routing logic based on API path elements.\n💡 The Microcks dispatcher can support multiple path elements to find the appropriate response to an incoming request. In that case, the dispatcher rule will have the form of part_1 \u0026amp;\u0026amp; part_2 \u0026amp;\u0026amp; part_3. When having multiple parts, you may find it useful to reuse the definition of example values using the $ref notation. This is totally supported by Microcks.\n5. Mocking a POST operation And now the final step! Let\u0026rsquo;s deal with a new operation that allows registering a new pet within the Petstore. For that, you\u0026rsquo;ll typically have to define a new POST operation on the /pets path. In order to be meaningful to the user of this operation, a mock would have to integrate some logic that reuses content from the incoming request and/or generates sample data. That\u0026rsquo;s typically what we\u0026rsquo;re going to do in this last section 😉\nLet\u0026rsquo;s add such a new operation into the API by adding the following snippet into the /pets path section. The subtlety here is that we\u0026rsquo;re integrating specific elements in our Example Objects:\npaths: /pets: [...] post: requestBody: content: application/json: schema: $ref: \u0026#39;#/components/schemas/NewPet\u0026#39; examples: new_pet: value: name: Jojo responses: \u0026#34;201\u0026#34;: content: application/json: schema: $ref: \u0026#39;#/components/schemas/Pet\u0026#39; examples: new_pet: value: |- { \u0026#34;id\u0026#34;: {{ randomInt(5,10) }}, \u0026#34;name\u0026#34;: \u0026#34;{{ request.body/name }}\u0026#34; } Microcks has the ability to generate dynamic mock content. The new_pet example fragment above embeds two specific notations that are:\n{{ randomInt(5,10) }} for asking Microcks to generate a random integer between 5 and 10 for us (remember: the other pets have ids going from 1 to 4),\n{{ request.body/name }} for asking Microcks to reuse the name property of the request body here. Simple. When imported into Microcks, you should have the following result:\nYou can see in the picture above that the Dispatcher has no value as we have no parameters in the operation. But this does not prevent us from using both parameters and template functions. In fact, templating also allows you to reuse request parameters to inject into response content. Let\u0026rsquo;s now finally test this mock URL using some content and see what\u0026rsquo;s going on:\n$ curl http://localhost:8585/rest/Petstore+API/1.0.0/pets -H \u0026#39;Content-Type: application/json\u0026#39; -d \u0026#39;{\u0026#34;name\u0026#34;:\u0026#34;Rusty\u0026#34;}\u0026#39; -s | jq { \u0026#34;id\u0026#34;: 8, \u0026#34;name\u0026#34;: \u0026#34;Rusty\u0026#34; } As a result we\u0026rsquo;ve got our pet named Rusty returned with a new id generated. Ta Dam! 🥳\n🛠️ As a validation, send a few more requests changing your pet name. You\u0026rsquo;ll see that the given name is always returned. But you can also go further by defining an advanced dispatcher that will inspect your request body content to decide which response must be sent back. Very useful to describe different creation or error cases!
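For instance, posting another pet should echo its name back with a fresh random id between 5 and 10:\n$ curl http://localhost:8585/rest/Petstore+API/1.0.0/pets -H \u0026#39;Content-Type: application/json\u0026#39; -d \u0026#39;{\u0026#34;name\u0026#34;:\u0026#34;Leo\u0026#34;}\u0026#39; -s | jq { \u0026#34;id\u0026#34;: 6, \u0026#34;name\u0026#34;: \u0026#34;Leo\u0026#34; }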
Wrap-Up In this tutorial we have seen the basics of how Microcks can be used to mock responses of an OpenAPI-defined API. We introduced some Microcks concepts like examples, dispatchers and templating features that are used to produce a live simulation. This definitely helps speed up the feedback loop on the ongoing design as well as the development of a frontend consuming this API.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/automation/","title":"Automation","description":"Here below all the guides related to **Automation**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/references/configuration/helm-chart-config/","title":"Helm Chart Parameters","description":"","searchKeyword":"","content":"Introduction One easy way of installing Microcks is via a Helm Chart. Kubernetes version 1.17 or greater is required. It is assumed that you have some kind of Kubernetes cluster up and running and available. This can take several forms depending on your environment and needs:\nLightweight Minikube on your laptop, see the Minikube project page, A Google Cloud Engine account in the cloud, see how to start a Free trial, Any other Kubernetes distribution provider. Helm 3 Chart Microcks provides a Helm 3 chart that is now available on our own repository: https://microcks.io/helm. This allows you to install Microcks with just 3 commands:\n$ helm repo add microcks https://microcks.io/helm $ kubectl create namespace microcks $ helm install microcks microcks/microcks --version 1.9.1 --namespace microcks \\ --set microcks.url=microcks.$(minikube ip).nip.io \\ --set keycloak.url=keycloak.$(minikube ip).nip.io \\ --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 Values Reference For full instructions and deployment options, we recommend reading the README on the GitHub repository.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/automation/github-actions/","title":"Using in GitHub Actions","description":"","searchKeyword":"","content":"Overview This guide shows you how to integrate Microcks into your GitHub Actions workflows. Microcks provides 2 GitHub Actions for interacting with a Microcks instance from your workflows:\nThe Microcks Import GitHub Action allows you to import Artifacts in a Microcks instance. If the import succeeds, the workflow continues; if not, it fails,\nThe Microcks Test GitHub Action allows you to launch a Microcks test on a deployed API endpoint. If the test succeeds (i.e. the API endpoint is conformant with the API contract in Microcks), the workflow continues; if not, it fails.\nThose 2 actions are basically a wrapper around the Microcks CLI and are using a Service Account. They provide the same configuration capabilities. In particular, they share the same mandatory configuration parameters:\nmicrocksURL for the Microcks API endpoint, keycloakClientId for the Keycloak Realm Service Account ClientId, keycloakClientSecret for the Keycloak Realm Service Account ClientSecret. 1. Find them in the Marketplace Obviously, you can find these actions on the GitHub Actions Marketplace 😉\nYou may add one of the Actions to your Workflow directly from the GitHub UI.\n2. 
Import GitHub Action The import action, based on the CLI command, has just one argument that specifies a comma-separated list of file paths:\n\u0026lt;specificationFile1[:primary],specificationFile2[:primary]\u0026gt;: The file paths with an optional flag telling if each should be imported as primary or not. See the Multi-artifacts explanations documentation. Default is true, so it is considered as primary. Step 1 - Configure the Action Here\u0026rsquo;s an example below:\nname: my-workflow on: [push] jobs: my-job: runs-on: ubuntu-latest environment: Development steps: - uses: microcks/import-github-action@v1 with: specificationFiles: \u0026#39;samples/weather-forecast-openapi.yml:true,samples/weather-forecast-postman.json:false\u0026#39; microcksURL: \u0026#39;https://microcks.apps.acme.com/api/\u0026#39; keycloakClientId: ${{ secrets.MICROCKS_SERVICE_ACCOUNT }} keycloakClientSecret: ${{ secrets.MICROCKS_SERVICE_ACCOUNT_CREDENTIALS }} Step 2 - Configure the Secrets It\u0026rsquo;s a best practice to use GitHub Secrets (general or tied to an Environment like in the example) to hold the Keycloak credentials (client Id and Secret). See below the Secrets configuration we\u0026rsquo;ve used for the example:\n3. Test GitHub Action The test action, based on the CLI command, needs 3 arguments:\n\u0026lt;apiName:apiVersion\u0026gt; : Service to test reference. Example: 'Beer Catalog API:0.9' \u0026lt;testEndpoint\u0026gt; : URL where the implementation to test is deployed \u0026lt;runner\u0026gt; : Test strategy (one of: HTTP, SOAP, SOAP_UI, POSTMAN, OPEN_API_SCHEMA, ASYNC_API_SCHEMA) And some optional ones that are the same as the CLI\u0026rsquo;s and that you may find in the Microcks Test GitHub Action repository.\nStep 1 - Configure the Action Here\u0026rsquo;s an example below:\nname: my-workflow on: [push] jobs: my-job: runs-on: ubuntu-latest environment: Development steps: - uses: microcks/test-github-action@v1 with: apiNameAndVersion: \u0026#39;API Pastry - 2.0:2.0.0\u0026#39; testEndpoint: \u0026#39;http://my-api-pastry.apps.cluster.example.com\u0026#39; runner: OPEN_API_SCHEMA microcksURL: \u0026#39;https://microcks.apps.acme.com/api/\u0026#39; keycloakClientId: ${{ secrets.MICROCKS_SERVICE_ACCOUNT }} keycloakClientSecret: ${{ secrets.MICROCKS_SERVICE_ACCOUNT_CREDENTIALS }} waitFor: \u0026#39;10sec\u0026#39; Step 2 - Configure the Secrets It\u0026rsquo;s a best practice to use GitHub Secrets (general or tied to an Environment like in the example) to hold the Keycloak credentials (client Id and Secret). See below the Secrets configuration we\u0026rsquo;ve used for the example:\nWrap-up You have learned how to get and use the Microcks GitHub Actions. The GitHub Actions reuse the Microcks CLI and the Service Account, so their documentation is definitely worth the read 😉\nThe most up-to-date information and reference documentation can be found in the repository README.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/async-protocols/mqtt-support/","title":"MQTT Mocking & Testing","description":"","searchKeyword":"","content":"Overview This guide shows you how to use the Message Queuing Telemetry Transport (MQTT) protocol with Microcks. MQTT is a standard messaging protocol for the Internet of Things (IoT). It is used today in a wide variety of industries, such as automotive, manufacturing, telecommunications, oil and gas, etc.\nMicrocks supports MQTT as a protocol binding for AsyncAPI. 
That means that Microcks is able to connect to an MQTT broker for publishing mock messages as soon as it receives a valid AsyncAPI Specification, and to connect to any MQTT broker in your organization to check that flowing messages are compliant with the schema described within your specification.\nLet\u0026rsquo;s start! 🚀\n1. Setup MQTT broker connection The first mandatory step here is to set up Microcks so that it will be able to connect to an MQTT broker for sending mock messages. Microcks has been tested successfully with ActiveMQ Artemis as well as Eclipse Mosquitto with MQTT protocol version 3.1.1. Both can be deployed as containerized workloads on your Kubernetes cluster. Microcks does not provide any installation scripts or procedures; please refer to the projects\u0026rsquo; or related products\u0026rsquo; documentation.\nIf you have used the Operator based installation of Microcks, you\u0026rsquo;ll need to add some extra properties to your MicrocksInstall custom resource. The fragment below shows the important ones:\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: [...] features: async: enabled: true [...] mqtt: url: mqtt-broker.app.example.com:1883 username: microcks password: microcks The async feature should of course be enabled, and then the important things to notice are located in the mqtt block:\nurl is the hostname + port where the broker can be reached by Microcks, username is simply the user to use for authenticating the connection, password represents this user\u0026rsquo;s credentials. For now, Microcks does not support connecting to a broker secured using TLS. This is tracked in an RFE here and will be implemented in the near future.\nIf you have used the Helm Chart based installation of Microcks, this is the corresponding fragment put in a Values.yml file:\n[...] features: async: enabled: true [...] mqtt: url: mqtt-broker.app.example.com:1883 username: microcks password: microcks The actual connection to the MQTT broker will only be made once Microcks sends mock messages to it. Let\u0026rsquo;s see below how to use the MQTT binding with AsyncAPI.\n2. Use MQTT in AsyncAPI As MQTT is not the default binding in Microcks, you should explicitly add it as a valid binding within your AsyncAPI contract. Here is below a fragment of an AsyncAPI specification file that shows the important things to notice when planning to use MQTT and Microcks with AsyncAPI. It comes from a sample you can find on our GitHub repository.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.streetlights\u0026#39; [...] channels: smartylighting/streetlights/event/lighting/measured: [...] subscribe: [...] bindings: mqtt: qos: 0 retain: false You\u0026rsquo;ll notice that we just have to add a non-empty mqtt block within the operation bindings. Just define one property or more (like qos for example) and Microcks will detect that this binding has been specified.\nAs usual, as Microcks internal mechanics are based on examples, you will also have to attach examples to your AsyncAPI specification.\nIn our example we have used references to a shared message structure that is also holding examples. We have defined 3 virtual devices that are sending their lumens measure and the corresponding date, still coming from our GitHub repository.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.streetlights\u0026#39; [...] defaultContentType: application/json channels: smartylighting/streetlights/event/lighting/measured: [...] subscribe: [...] 
bindings: mqtt: qos: 0 retain: false message: $ref: \u0026#39;#/components/messages/lightMeasured\u0026#39; components: messages: lightMeasured: [...] traits: - $ref: \u0026#39;#/components/messageTraits/commonHeaders\u0026#39; payload: $ref: \u0026#39;#/components/schemas/lightMeasuredPayload\u0026#39; examples: - dev0: summary: Example for Device 0 headers: |- {\u0026#34;my-app-header\u0026#34;: 14} payload: |- {\u0026#34;streetlightId\u0026#34;:\u0026#34;dev0\u0026#34;, \u0026#34;lumens\u0026#34;:1000, \u0026#34;sentAt\u0026#34;:\u0026#34;{{now(yyyy-MM-dd\u0026#39;T\u0026#39;HH:mm:SS\u0026#39;Z\u0026#39;)}}\u0026#34;} - dev1: summary: Example for Device 1 headers: my-app-header: 14 payload: streetlightId: dev1 lumens: 1100 sentAt: \u0026#34;{{now(yyyy-MM-dd\u0026#39;T\u0026#39;HH:mm:SS\u0026#39;Z\u0026#39;)}}\u0026#34; - dev2: summary: Example for Device 2 headers: my-app-header: 14 payload: streetlightId: dev2 lumens: 1200 sentAt: \u0026#34;{{now(yyyy-MM-dd\u0026#39;T\u0026#39;HH:mm:SS\u0026#39;Z\u0026#39;)}}\u0026#34; If you\u0026rsquo;re not yet accustomed to it, you may wonder what this {{now(yyyy-MM-dd\u0026#39;T\u0026#39;HH:mm:SS\u0026#39;Z\u0026#39;)}} notation is. These are just Templating functions that allow generation of dynamic content! 😉\nNow simply import your AsyncAPI file into Microcks either using a Direct upload import or by defining an Importer Job. Both methods are described on this page.\n3. Validate your mocks Now it\u0026rsquo;s time to validate that the mock publication of messages on the connected broker is correct. In a real-world scenario this means developing a consuming script or application that connects to the topic where Microcks is publishing messages.\nFor our Streetlights API, we have such a consumer in a GitHub repository.\nFollow these steps to retrieve it, install dependencies and check the Microcks mocks:\n$ git clone https://github.com/microcks/api-tooling.git $ cd api-tooling/async-clients/mqttjs-client $ npm install $ node consumer.js mqtt://mqtt-broker.app.example.com:1883 StreetlightsAPI_1.0.0_smartylighting-streetlights-event-lighting-measured microcks microcks Connecting to mqtt://mqtt-broker.app.example.com:1883 on topic StreetlightsAPI_1.0.0_smartylighting-streetlights-event-lighting-measured { \u0026#34;streetlightId\u0026#34;: \u0026#34;dev0\u0026#34;, \u0026#34;lumens\u0026#34;: 1000, \u0026#34;sentAt\u0026#34;: \u0026#34;2021-02-14T10:01:783Z\u0026#34; } { \u0026#34;streetlightId\u0026#34;: \u0026#34;dev1\u0026#34;, \u0026#34;lumens\u0026#34;: 1100, \u0026#34;sentAt\u0026#34;: \u0026#34;2021-02-14T10:01:784Z\u0026#34; } { \u0026#34;streetlightId\u0026#34;: \u0026#34;dev2\u0026#34;, \u0026#34;lumens\u0026#34;: 1200, \u0026#34;sentAt\u0026#34;: \u0026#34;2021-02-14T10:01:785Z\u0026#34; } 🎉 Fantastic! We are receiving the three different messages corresponding to the three defined devices every 3 seconds, which is the default publication frequency. You\u0026rsquo;ll notice that each sentAt property has a different value thanks to the templating notation.\nNote: this simple consumer.js script is also able to handle TLS connections to your MQTT broker. It was omitted here for the sake of simplicity but you can also use commands like: node consumer.js mqtts://artemis-my-acceptor-0-svc-rte-microcks.apps.example.com:443 StreetlightsAPI_1.0.0_smartylighting-streetlights-event-lighting-measured admin mypassword broker.crt\n4. Run AsyncAPI tests Now the final step is to perform some tests of the validation features in Microcks. 
As we will need an API implementation for that, and it\u0026rsquo;s not as easy as writing an HTTP-based API implementation, we have some helpful scripts in our api-tooling GitHub repository. These scripts are made for working with the Streetlights API sample we used so far, but feel free to adapt them for your own use.\nImagine that you want to validate messages from a QA environment with a dedicated MQTT broker. Still being in the mqttjs-client folder, now use the producer.js utility script to publish messages on a streetlights-event-lighting-measured topic:\n$ node producer.js mqtts://mqtt-broker-qa.app.example.com:443 streetlights-event-lighting-measured qa-user qa-password broker-qa.crt Connecting to mqtts://mqtt-broker-qa.app.example.com:443 on topic streetlights-event-lighting-measured { streetlightId: \u0026#39;devX\u0026#39;, lumens: 900, sentAt: \u0026#39;2021-02-15T09:06:42.744Z\u0026#39; } { streetlightId: \u0026#39;devX\u0026#39;, lumens: 900, sentAt: \u0026#39;2021-02-15T09:06:45.750Z\u0026#39; } [...] Do not interrupt the execution of the script for now.\nIf the QA broker access is secured - let\u0026rsquo;s say with credentials and custom certificates - we will first have to manage a Secret in Microcks to hold this information. Within the Microcks console, first go to the Administration section and the Secrets tab.\nAdministration and Secrets will only be available to people having the administrator role assigned. Please check this documentation for details.\nThe screenshot below illustrates the creation of such a secret for your QA MQTT Broker with username, credentials and custom certificates using the PEM format.\nOnce saved, we can go create a New Test within the Microcks web console. Use the following elements in the Test form:\nTest Endpoint: mqtt://mqtt-broker-qa.app.example.com:443/streetlights-event-lighting-measured that is referencing the MQTT broker endpoint, Runner: ASYNC API SCHEMA for validating against the AsyncAPI specification of the API, Timeout: Keep the default of 10 seconds, Secret: This is where you\u0026rsquo;ll select the QA MQTT Broker you previously created. Launch the test, wait a few seconds and you should get access to the test results as illustrated below:\nThis is fine and we can see that Microcks captured messages and validated them against the payload schema that is embedded into the AsyncAPI specification. In our sample, every property is required and the message does not allow additionalProperties to be defined.\nSo now let\u0026rsquo;s see what happens if we tweak that a bit\u0026hellip; Open the producer.js script in your favorite editor to comment out line 35 and uncomment line 36. This removes the lumens measure and adds an unexpected location property, as shown below after having restarted the producer:\n$ node producer.js mqtts://mqtt-broker-qa.app.example.com:443 streetlights-event-lighting-measured qa-user qa-password broker-qa.crt Connecting to mqtts://mqtt-broker-qa.app.example.com:443 on topic streetlights-event-lighting-measured { streetlightId: \u0026#39;devX\u0026#39;, location: \u0026#39;47.8509682604982, 0.11136576784773598\u0026#39;, sentAt: \u0026#39;2021-02-15T10:04:49.669Z\u0026#39; } { streetlightId: \u0026#39;devX\u0026#39;, location: \u0026#39;47.8509682604982, 0.11136576784773598\u0026#39;, sentAt: \u0026#39;2021-02-15T10:04:52.676Z\u0026#39; } [...] Relaunch a new test and you should get results similar to those below:\n🥳 We can see that there\u0026rsquo;s now a failure and that\u0026rsquo;s perfect! What does that mean? 
Wrap-Up In this guide we have seen how Microcks can also be used to send mock messages on an MQTT broker connected to the Microcks instance. This helps speed up the development of applications consuming these messages. We finally ended up demonstrating how Microcks can be used to detect any drift between the expected message format and the one actually used by real-life producers.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/administration/secrets/","title":"Accessing secured Resources","description":"","searchKeyword":"","content":"Overview Quickly after your initial experience with Microcks, you\u0026rsquo;ll realize that it needs to access some of your private resources for smooth integration in your lifecycle. Typically:\nLoading Artifacts may require accessing secured external resources such as Git repositories, Launching tests may require accessing protected HTTPS endpoints or internal message brokers. This guide will explain the concept of a Secret in Microcks, how to manage those Secrets and how to use them when defining an Importer Job.\n🚨 Prerequisites\nSecrets can only be managed by a Microcks admin - that is, people having the admin role assigned. If you need further information on how to manage users and roles, please check how-to Manage Users.\n1. Authentication Secrets Authentication Secrets (or simply Secrets) are managed by a Microcks administrator and hold credentials for accessing remote resources such as Git repositories, remote API endpoints or event brokers.\nThe credential information wrapped within a Secret can be of several natures, like a User/Password pair, a Token or some X509 certificates.\nSecrets are stored within the Microcks database and may be reused by regular users when creating an Importer Job or launching a new test. At that time, regular users just refer to the Secret name and don\u0026rsquo;t get access to the details.\nSecrets management is simply a tab within the Administration page that is available from the vertical menu on the left once logged in as an administrator.\nLet\u0026rsquo;s see how to create/update a secret and its properties below.\n2. Edit Secret properties Let\u0026rsquo;s imagine you want to create a secret that will hold information on how to access your corporate GitLab instance. Below is the form you\u0026rsquo;ll have to fill in. It may involve an authentication method and its properties as well as transport encryption information such as the custom certificate to use.\nAuthentication may be realized using the different methods described below.\nAuthentication Type Description None No authentication is actually realized. In this case, the secret may only be useful to hold a custom certificate to access a private resource. Basic Authentication An HTTP Basic authentication is attempted when connecting to the remote resource. When selecting this method, the form will just ask for a User and a Password. Token Authentication An HTTP Bearer or custom authentication is attempted with the provided Token. If no Token-Header is specified, the standard Authorization: Bearer \u0026lt;provided token\u0026gt; header is used. If a Token-Header is specified, the token is added as the value of this specific header (a Token-Header of Private-Token, for instance, would produce a Private-Token: \u0026lt;provided token\u0026gt; header).
The CA Certificate is here to hold a custom certificate or certificate chain specified in PEM format.\nUsing the form, you may create as many Secrets as you need for different resources. Regular users of the Microcks instance will just have access to the name and the description of the secrets.\n3. Adding a Secret to an Import Job Now that you have created and managed your secrets, they can be reused when defining an Import Job. To do that, just go and update a job: the second step of the wizard modal is dedicated to security concerns. You may now just add a reference to (and thus a usage of) one of your secrets.\nWhen Microcks schedules and executes this job to check for updates of the artifact resource, it will simply use the referenced secret. Your job is now identified as using a secret with a black lock 🔒 in the UI:\nEach and every time the scheduled import job is fired, it will reuse the up-to-date information of the secret to provide the correct token and certificates to the external resource.\nWrap-up Following this guide, you have learned how Authentication Secrets allow you to hold credential information for accessing secured remote resources used by Microcks importers or tests. You should now be confident in the way Microcks accesses these protected resources, letting regular users just reference the Secret name.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/stateful-mocks/","title":"Configuring stateful mocks","description":"","searchKeyword":"","content":"Overview Microcks has allowed specifying dynamic mock content using expressions since the early days. Most of the time, those features help in translating the dynamic behaviour of an API and provide meaningful simulations. However, you may sometimes need to provide even more realistic behaviour, and that\u0026rsquo;s where stateful mocks may be of interest.\n💡 Stateful mocks are available starting with Microcks 1.10.0.\nIn this guide, we\u0026rsquo;ll go through the different concepts that are useful when configuring stateful mocks with Microcks. We\u0026rsquo;ll illustrate how to use those concepts on a real use-case of a shopping cart, allowing you to persist chosen items in a customer cart.\nIf you haven\u0026rsquo;t started a Microcks instance yet, you can do so using the following command - maybe replacing 8585 by another port of your choice if this one is not free:\n$ docker run -p 8585:8080 -it --rm quay.io/microcks/microcks-uber:latest Then, you\u0026rsquo;ll need to import the content of our stateful-cart-openapi.yaml OpenAPI specification to follow the explanations in the next sections.\n1. Concepts When configuring stateful mocks in Microcks, you\u0026rsquo;ll rely on those useful concepts:\nThe SCRIPT dispatcher will be mandatory as it will hold your persistence logic (see the Script explanations), The store is an implicit service that is available within scripts. It allows you to persist state within a simple Key/Value store. Keys and values are simple strings you may process and manage the way you want. Check some examples in common use-cases for scripts, The requestContext is a request-scoped context that allows passing content to response templates, Finally, the templating Context Expressions can be very useful to reuse persisted (or computed) information in mock responses!
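To give you a feel for how these primitives fit together before diving into the real use-case, here is a minimal - and purely hypothetical - SCRIPT dispatcher sketch that counts the calls made to an operation and exposes the counter to response templates:\n// Read the previously persisted counter, if any (keys and values are strings).\ndef count = store.get(\u0026#34;call-count\u0026#34;)\ncount = (count == null) ? 1 : Integer.parseInt(count) + 1\n// Persist the new value for 60 seconds and expose it to response templates.\nstore.put(\u0026#34;call-count\u0026#34;, String.valueOf(count), 60)\nrequestContext.callCount = count\n// Return the name of the response to use (assuming a response named Default exists).\nreturn \u0026#34;Default\u0026#34;\nA response template could then reuse {{ callCount }} to inject the computed value.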
🚨 One important thing to notice when using stateful capabilities in Microcks is that state is not persisted forever. The values you\u0026rsquo;ll register in the store are subject to a Time-To-Live period. This duration is customizable, with a default value of 10 seconds.\n🚨 A second important thing to notice when using stateful capabilities in Microcks is that the store is scoped to an API. This means that it is shared between the different operations of the same API but not available to other APIs. You cannot write in a store within an API context and read from the same store from another API.\nWe\u0026rsquo;ll use all those concepts together with our stateful-cart-openapi.yaml specification. Please use this OpenAPI file as a reference for the next sections. In each and every section, we\u0026rsquo;ll highlight the specification details that enable statefulness in mocks.\n2. Retrieving state In our shopping cart use-case, the first operation to consider is GET /cart that the application must use to get the status of a specific customer cart. In our sample, the customer identifier is provided as a request header named customerId. We want to use Microcks\u0026rsquo; stateful store to retrieve the cart items stored under a key \u0026lt;customerId\u0026gt;-items and compute the cart total price.\nUsing a SCRIPT dispatcher, we can write the following Groovy script to do so:\n// Retrieve customer id and associated items if any. def customerId = mockRequest.getRequestHeaders().get(\u0026#34;customerId\u0026#34;, \u0026#34;null\u0026#34;) def items = store.get(customerId + \u0026#34;-items\u0026#34;) // If items exist, convert them into objects and compute total price. if (items != null) { def cartItems = new groovy.json.JsonSlurper().parseText(items) def totalPrice = 0.0 for (item in cartItems) { totalPrice += item.price * item.quantity } // Fill context with store items and computed price. requestContext.items = items requestContext.totalPrice = totalPrice } else { // No items: fill context with empty list and 0.0 price. requestContext.items = [] requestContext.totalPrice = 0.0 } return \u0026#34;Cart\u0026#34; We simply used the store.get(key) function here to read a previously persisted state. You see that this script returns a single result that is the name of the response to use: Cart. A generic cart representation is actually specified within the stateful-cart-openapi.yaml OpenAPI specification. It uses template expressions to retrieve information either from the incoming request or from the current requestContext:\n[...] responses: 200: content: application/json: schema: [...] examples: Cart: value: |- { \u0026#34;customerId\u0026#34;: \u0026#34;{{request.headers[customerid]}}\u0026#34;, \u0026#34;items\u0026#34;: {{ items }}, \u0026#34;totalPrice\u0026#34;: {{ totalPrice }} } [...] As a first test, you may check the initial state of our cart by issuing the following request to the mock endpoints provided by Microcks:\n# Check johndoe\u0026#39;s cart $ curl -X GET \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; { \u0026#34;customerId\u0026#34;: \u0026#34;johndoe\u0026#34;, \u0026#34;items\u0026#34;: [], \u0026#34;totalPrice\u0026#34;: 0 } You\u0026rsquo;ve used a stateful mock in Microcks, congrats! 🎉 Ok, you didn\u0026rsquo;t notice any change yet as we didn\u0026rsquo;t persist anything, but that\u0026rsquo;s for the next section 😉
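💡 Since state is keyed by the customerId header in our script, each customer gets an isolated cart. You can verify this with another - hypothetical - customer identifier, which should also return an empty cart:\n# Check janedoe\u0026#39;s cart: a different customer gets an independent (still empty) cart $ curl -X GET \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;customerId: janedoe\u0026#39; { \u0026#34;customerId\u0026#34;: \u0026#34;janedoe\u0026#34;, \u0026#34;items\u0026#34;: [], \u0026#34;totalPrice\u0026#34;: 0 }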
3. Persisting state We\u0026rsquo;re now going to persist some state within the PUT /cart/items operation that the application must use to add new items into the cart. When sending a new item description (a productId, a quantity and a unit price), we\u0026rsquo;re going to add this item to the customer cart so that the status will be updated when using the previous operation.\nWe can write the following Groovy script to do so:\n// Retrieve customer id and associated items if any. def customerId = mockRequest.getRequestHeaders().get(\u0026#34;customerId\u0026#34;, \u0026#34;null\u0026#34;) def items = store.get(customerId + \u0026#34;-items\u0026#34;) def cartItems = [] // If items exist, convert them into objects. if (items != null) { cartItems = new groovy.json.JsonSlurper().parseText(items) } // Parse request input and add a new object in cart items. def item = new groovy.json.JsonSlurper().parseText(mockRequest.requestContent) cartItems.add([productId: item.productId, quantity: item.quantity, price: item.price]) // Store customer items for 60 seconds. store.put(customerId + \u0026#34;-items\u0026#34;, groovy.json.JsonOutput.toJson(cartItems), 60) return \u0026#34;One item\u0026#34; Here, we\u0026rsquo;ve used the store.put(key, value, ttl) function, recording our state for 60 seconds. Also, we included some logic to parse JSON text into objects and convert them back to text, as the persisted value is a regular character string. The script returns one single output that is the name of the response representation to use: One item. This representation is included in this operation\u0026rsquo;s OpenAPI specification and directly uses {{ }} expressions to just output the incoming request information:\n[...] responses: 201: content: application/json: schema: $ref: \u0026#39;#/components/schemas/Item\u0026#39; examples: One item: value: |- { \u0026#34;productId\u0026#34;: \u0026#34;{{request.body/productId}}\u0026#34;, \u0026#34;quantity\u0026#34;: {{request.body/quantity}}, \u0026#34;price\u0026#34;: {{request.body/price}} } [...] We may now fully test that we\u0026rsquo;re able to save a state by adding items and then read it back when asking for the full cart status. Let\u0026rsquo;s do this using the 2 commands on Microcks\u0026rsquo; endpoints below:\n# Add a millefeuille to the cart for user johndoe $ curl -X PUT \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart/items\u0026#39; -d \u0026#39;{\u0026#34;productId\u0026#34;:\u0026#34;Millefeuille\u0026#34;,\u0026#34;quantity\u0026#34;:2,\u0026#34;price\u0026#34;:4.0}\u0026#39; -H \u0026#39;Content-Type: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; { \u0026#34;productId\u0026#34;: \u0026#34;Millefeuille\u0026#34;, \u0026#34;quantity\u0026#34;: 2, \u0026#34;price\u0026#34;: 4.0 } # Check johndoe\u0026#39;s cart $ curl -X GET \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; { \u0026#34;customerId\u0026#34;: \u0026#34;johndoe\u0026#34;, \u0026#34;items\u0026#34;: [ { \u0026#34;productId\u0026#34;: \u0026#34;Millefeuille\u0026#34;, \u0026#34;quantity\u0026#34;: 2, \u0026#34;price\u0026#34;: 4 } ], \u0026#34;totalPrice\u0026#34;: 8 } Ho ho ho! We now have super-smart mocks that persist and retrieve state but also integrate computed elements in the response! Just with a few lines of Groovy script! 🕺
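💡 Remember the Time-To-Live: we stored the items for 60 seconds. Assuming you wait a bit more than a minute without touching the cart, the state expires and the same request should report an empty cart again:\n# Wait for the 60 seconds TTL to elapse, then check johndoe\u0026#39;s cart again $ sleep 65 $ curl -X GET \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; { \u0026#34;customerId\u0026#34;: \u0026#34;johndoe\u0026#34;, \u0026#34;items\u0026#34;: [], \u0026#34;totalPrice\u0026#34;: 0 }\n(If you try this, just re-add an item before moving on, as the next section expects a non-empty cart.)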
4. Removing state The final thing to explore in this guide is how to remove some state information from the store. We\u0026rsquo;ll consider for that the POST /cart/empty operation that can be triggered to remove all the items within a shopping cart.\nLet\u0026rsquo;s check the following Groovy snippet to do this:\ndef customerId = mockRequest.getRequestHeaders().get(\u0026#34;customerId\u0026#34;, \u0026#34;null\u0026#34;) store.delete(customerId + \u0026#34;-items\u0026#34;) return \u0026#34;Cart\u0026#34; Pretty easy, no? It\u0026rsquo;s just a matter of calling the store.delete(key) function. Here again, the script returns a single Cart response that is the generic representation of an empty shopping cart for the current user:\n[...] responses: 200: content: application/json: schema: [...] examples: Cart: value: |- { \u0026#34;customerId\u0026#34;: \u0026#34;{{request.headers[customerid]}}\u0026#34;, \u0026#34;items\u0026#34;: [], \u0026#34;totalPrice\u0026#34;: 0.0 } [...] As a final test, we may now check that we are able to add items to a cart, retrieve the cart items, delete all the items from the cart and finally read an empty cart status. Let\u0026rsquo;s go!\n# Add a Baba au Rhum curl -X PUT \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart/items\u0026#39; -d \u0026#39;{\u0026#34;productId\u0026#34;:\u0026#34;Baba Rhum\u0026#34;,\u0026#34;quantity\u0026#34;:1,\u0026#34;price\u0026#34;:4.1}\u0026#39; -H \u0026#39;Content-Type: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; # Check johndoe\u0026#39;s cart $ curl -X GET \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; { \u0026#34;customerId\u0026#34;: \u0026#34;johndoe\u0026#34;, \u0026#34;items\u0026#34;: [ { \u0026#34;productId\u0026#34;: \u0026#34;Millefeuille\u0026#34;, \u0026#34;quantity\u0026#34;: 2, \u0026#34;price\u0026#34;: 4 }, { \u0026#34;productId\u0026#34;: \u0026#34;Baba Rhum\u0026#34;, \u0026#34;quantity\u0026#34;: 1, \u0026#34;price\u0026#34;: 4.1 } ], \u0026#34;totalPrice\u0026#34;: 12.1 } # Empty johndoe\u0026#39;s cart $ curl -X POST \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart/empty\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; { \u0026#34;customerId\u0026#34;: \u0026#34;johndoe\u0026#34;, \u0026#34;items\u0026#34;: [], \u0026#34;totalPrice\u0026#34;: 0 } # Check johndoe\u0026#39;s cart $ curl -X GET \u0026#39;http://localhost:8585/rest/Cart+API/1.0.0/cart\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;customerId: johndoe\u0026#39; { \u0026#34;customerId\u0026#34;: \u0026#34;johndoe\u0026#34;, \u0026#34;items\u0026#34;: [], \u0026#34;totalPrice\u0026#34;: 0 } Wrap-up Starting with the 1.10.0 release, Microcks mocks can now become stateful. Automatically turning mocks into stateful simulations is impossible as there are numerous design guidelines that need to be considered, and after all, the world is definitely not only CRUD 😉\nAt Microcks we took the approach of putting this power in the user\u0026rsquo;s hands, providing powerful primitives like scripts, store, requestContext and template expressions to manage persistence where it makes sense for your simulations.\nYou\u0026rsquo;ve seen these different primitives - the store.get(), store.put() and store.delete() functions - in action during this how-to guide. Remember that the things you\u0026rsquo;ve learned here are not restricted to REST APIs but are also applicable to other API types like GraphQL, gRPC and SOAP!\nHappy mocking!
🤡\n"},{"section":"Documentation","url":"https://microcks.io/documentation/overview/alternatives/","title":"Alternatives","description":"","searchKeyword":"","content":" Comparison with alternatives is always a tough question 🤔\nPlease check this neutral Wikipedia page for more insights: Comparison of API simulation tools\nIf you would like a more opinionated description of \u0026ldquo;How Microcks compares to Pact for Contract Testing?\u0026rdquo;, you may want to read this Medium blog post by one of the project co-founders: Microcks and Pact for API contract testing.\nFinally, if you\u0026rsquo;re wondering why we think Microcks is unique in terms of Development Lifecycle coverage, you may check How Microcks fit and unify Inner and Outer Loops for cloud-native development by the other co-founder of the project.\nHappy reading! 😉\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/graphql-conventions/","title":"GraphQL Conventions","description":"","searchKeyword":"","content":"In order to use GraphQL in Microcks, you will need two artifacts for each API definition as explained in Multi-artifacts support:\nA GraphQL IDL Schema definition that holds the API metadata and operations definitions, A Postman Collection file that holds the mock examples (requests and responses) for the different operations of the GraphQL API. Conventions In order to be correctly imported and understood by Microcks, your GraphQL IDL and Postman files should follow a small set of reasonable conventions and best practices.\nGraphQL Schema doesn\u0026rsquo;t have the notion of API name or version. In Microcks, this notion is critical, and we will thus need a specific comment notation to get this information. You\u0026rsquo;ll need to add a comment line starting with microcksId: in your schema file, referencing the \u0026lt;API name\u0026gt;:\u0026lt;API version\u0026gt;. See an example below: # microcksId: Movie Graph API : 1.0 schema { query: Query mutation: Mutation } [...] Your Postman collection will need to have a name that matches the GraphQL API name and a custom property version that matches the above referenced version, Your Postman collection will need to organize examples into requests having the same name and url as the GraphQL queries or mutations, Your Postman collection will then simply hold examples defined in JSON, defining the value for all the different fields of a response. Microcks will later apply field selection as required in GraphQL. We recommend having a look at our sample GraphQL API for the Movie Graph API as well as the companion Postman collection to fully understand and see those conventions in action.\nDispatchers GraphQL API mocks in Microcks support 5 different types of dispatcher. The first two can be directly inferred by Microcks during the import of the GraphQL Schema:\nempty dispatcher means that Microcks will pick the first available response of the operation. It is deduced for queries with no arguments like the allFilms operation, QUERY_ARGS dispatcher is deduced for queries or mutations presenting simple scalar-typed arguments, like for example the film query that allows finding by identifier. Other dispatching strategies can then be set up with dispatcher and dispatcherRules customization:\nJSON_BODY dispatcher can be used for dispatching based on the content of GraphQL variables. The JSON representing variables is injected as the reference body and is used for evaluation, SCRIPT dispatcher can be used for dispatching based on the content of the complete HTTP Request body. Basically, you\u0026rsquo;ll receive the whole POST request and have to return the name of the response to return based on whatever criteria, FALLBACK dispatcher can finally be used in combination with any of the 4 other dispatchers to provide fallback behavior.
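As an illustration, here is a sketch of a JSON_BODY dispatcherRules document for the film query - assuming hypothetical example names film 1 and no film. The exp is a JSON Pointer evaluated against the GraphQL variables document:\n{\n \u0026#34;exp\u0026#34;: \u0026#34;/id\u0026#34;,\n \u0026#34;operator\u0026#34;: \u0026#34;equals\u0026#34;,\n \u0026#34;cases\u0026#34;: {\n \u0026#34;1\u0026#34;: \u0026#34;film 1\u0026#34;,\n \u0026#34;default\u0026#34;: \u0026#34;no film\u0026#34;\n }\n}\nWith such a rule, a query carrying variables {\u0026#34;id\u0026#34;: \u0026#34;1\u0026#34;} would be routed to the film 1 example, and anything else to no film.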
Illustration Let\u0026rsquo;s dive into the details of our sample Movie Graph API.\nSpecifying API structure This is a fairly basic GraphQL API that is inspired by the famous Films samples you find on Graphql.org. You can see the definition below, using the Schema IDL, found in the films.graphql file:\n# microcksId: Movie Graph API : 1.0 schema { query: Query mutation: Mutation } type Film { id: String! title: String! episodeID: Int! director: String! starCount: Int! rating: Float! } type FilmsConnection { totalCount: Int! films: [Film] } input Review { comment: String rating: Int } type Query { allFilms: FilmsConnection film(id: String): Film } type Mutation { addStar(filmId: String): Film addReview(filmId: String, review: Review): Film } Considering the first comment line of this file, when imported into Microcks, it will discover the Movie Graph API with version 1.0 and four operations that are: allFilms, film, addStar and addReview.\nSpecifying API examples Specification of examples is done using a Postman Collection, as examples cannot be attached to the main GraphQL Schema, thanks to the multi-artifacts support feature.\nUsing Postman, just create a new Collection - using the same name as the GraphQL API and adding the custom property version at the beginning of the description as illustrated below:\nYou can now start organizing and creating requests that match the GraphQL API queries or mutations operation names. For our example, we\u0026rsquo;re specifying the four operations: allFilms, film, addStar and addReview.\n💡 If you have imported or defined your GraphQL API Schema in Postman, you can also directly create a Collection from it. In that case, Postman will auto-organize content using the /queries and /mutations folders like below. This is convenient but not required by Microcks.\nThe next step is now to create a bunch of examples for each of the requests/operations of your Collection as explained in the Postman documentation. You\u0026rsquo;ll give each example a meaningful name regarding the use-case it is supposed to represent. The example url must also match the name of the GraphQL operation; here we have a simple {{url}} because the url is set at the upper request level, but this one must have the http://allFilms value.\nYou\u0026rsquo;ll define examples using simple JSON for the request body and for the response body as well. Below is a basic example, but Templating expressions and functions are obviously supported:\n💡 One particular thing about GraphQL is that it allows API consumers to select the fields they want in the response. Microcks GraphQL mocks are smart enough to perform that filtering, but you should take care to define every possible field\u0026rsquo;s value in your Collection examples. Otherwise, missing fields cannot be retrieved by consumers.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/kind-helm/","title":"On Kind with Helm","description":"","searchKeyword":"","content":"Overview This guide will walk you through the different steps of running a full Microcks installation on your laptop using Kind.
Step #4 is actually optional and may only be of interest if you\u0026rsquo;d like to use the Asynchronous features of Microcks.\nThese installation notes were run on an Apple MacBook M2, but the steps would be essentially the same on any Linux machine.\nLet\u0026rsquo;s go 🚀\n1. Preparation On a Mac, people usually use brew to install kind. However, it is also available from several different package managers out there. You can check the Quick Start guide for that. Obviously, you\u0026rsquo;ll also need the kubectl utility to interact with your cluster.\n$ brew install kind $ kind --version kind version 0.20.0 In a dedicated folder, prepare a cluster-kind.yaml configuration file like this:\n$ cd ~/tmp $ mkdir microcks \u0026amp;\u0026amp; cd microcks $ cat \u0026gt; cluster-kind.yaml \u0026lt;\u0026lt;EOF kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane kubeadmConfigPatches: - | kind: InitConfiguration nodeRegistration: kubeletExtraArgs: node-labels: \u0026#34;ingress-ready=true\u0026#34; extraPortMappings: - containerPort: 80 hostPort: 80 protocol: TCP - containerPort: 443 hostPort: 443 protocol: TCP EOF 2. Start and configure a cluster We\u0026rsquo;re now going to start a Kube cluster. Start your kind cluster using the cluster-kind.yaml configuration file we just created:\n$ kind create cluster --config=cluster-kind.yaml --- OUTPUT --- Creating cluster \u0026#34;kind\u0026#34; ... ✓ Ensuring node image (kindest/node:v1.27.3) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ ✓ Installing CNI 🔌 ✓ Installing StorageClass 💾 Set kubectl context to \u0026#34;kind-kind\u0026#34; You can now use your cluster with: kubectl cluster-info --context kind-kind Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂 Install an Ingress Controller in this cluster; we selected nginx but other options are available (see https://kind.sigs.k8s.io/docs/user/ingress).\n$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml Wait for the controller to be available:\n$ kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=90s 3. Install Microcks with default options We\u0026rsquo;re now going to install Microcks with basic options. We\u0026rsquo;ll do that using the Helm Chart, so you\u0026rsquo;ll also need the helm binary. You can use brew install helm on Mac for that.\n$ kubectl create namespace microcks $ helm repo add microcks https://microcks.io/helm $ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.127.0.0.1.nip.io --set keycloak.url=keycloak.127.0.0.1.nip.io --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --- OUTPUT --- NAME: microcks LAST DEPLOYED: Sun Dec 3 19:27:27 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.127.0.0.1.nip.io. GRPC mock service is available at \u0026#34;microcks-grpc.127.0.0.1.nip.io\u0026#34;.
It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.127.0.0.1.nip.io to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by login in using the username and password found into \u0026#39;microcks-keycloak-admin\u0026#39; secret. Wait for images to be pulled, pods to be started and ingresses to be there:\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-577874c5b6-z97zm 1/1 Running 0 73s microcks-keycloak-7477cd4fbb-tbmg7 1/1 Running 0 21s microcks-keycloak-postgresql-868b7dbdd4-8zrbv 1/1 Running 0 10m microcks-mongodb-78888fb67f-47fwh 1/1 Running 0 10m microcks-postman-runtime-5d8fc9695-kp45w 1/1 Running 0 10m $ kubectl get ingresses -n microcks --- OUTPUT --- NAME CLASS HOSTS ADDRESS PORTS AGE microcks \u0026lt;none\u0026gt; microcks.127.0.0.1.nip.io localhost 80, 443 10m microcks-grpc \u0026lt;none\u0026gt; microcks-grpc.127.0.0.1.nip.io localhost 80, 443 10m microcks-keycloak \u0026lt;none\u0026gt; keycloak.127.0.0.1.nip.io localhost 80, 443 10m Start by opening https://keycloak.127.0.0.1.nip.io in your browser to validate the self-signed certificate. Once done, you can visit https://microcks.127.0.0.1.nip.io in your browser, validate the self-signed certificate and start playing around with Microcks!\nThe default user/password is admin/microcks123\n4. Install Microcks with asynchronous options In this section, we\u0026rsquo;re doing a complete install of Microcks, enabling the asynchronous protocol features. This requires deploying additional pods and a Kafka cluster. The Microcks install can set up and manage its own Kafka cluster using the Strimzi project.\nTo be able to expose the Kafka cluster to the outside of Kind, you\u0026rsquo;ll need to enable SSL passthrough on nginx. This requires updating the default ingress controller deployment:\n$ kubectl patch -n ingress-nginx deployment/ingress-nginx-controller --type=\u0026#39;json\u0026#39; \\ -p \u0026#39;[{\u0026#34;op\u0026#34;:\u0026#34;add\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;/spec/template/spec/containers/0/args/-\u0026#34;,\u0026#34;value\u0026#34;:\u0026#34;--enable-ssl-passthrough\u0026#34;}]\u0026#39; Then, you have to install the latest version of Strimzi that provides an easy way to set up Kafka on Kubernetes:\n$ kubectl apply -f \u0026#39;https://strimzi.io/install/latest?namespace=microcks\u0026#39; -n microcks Now, you can install Microcks using the Helm chart and enable the asynchronous features:\n$ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.127.0.0.1.nip.io --set keycloak.url=keycloak.127.0.0.1.nip.io --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --set features.async.enabled=true --set features.async.kafka.url=kafka.127.0.0.1.nip.io --- OUTPUT --- NAME: microcks LAST DEPLOYED: Sun Dec 3 20:14:38 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.127.0.0.1.nip.io. GRPC mock service is available at \u0026#34;microcks-grpc.127.0.0.1.nip.io\u0026#34;.
It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.127.0.0.1.nip.io to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by login in using the username and password found into \u0026#39;microcks-keycloak-admin\u0026#39; secret. Kafka broker has been deployed on microcks-kafka.kafka.127.0.0.1.nip.io. It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath=\u0026#39;{.data.ca\\.crt}\u0026#39; | base64 -d \u0026gt; ca.crt Watch and check the pods you should get in the namespace:\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-6ffcc7dc54-c9h4w 1/1 Running 0 68s microcks-async-minion-7f689d9ff7-ptv4c 1/1 Running 2 (40s ago) 48s microcks-kafka-entity-operator-585dc4cd45-24tvp 3/3 Running 0 2m19s microcks-kafka-kafka-0 1/1 Running 0 2m41s microcks-kafka-zookeeper-0 1/1 Running 5 (4m56s ago) 6m43s microcks-keycloak-77447d8957-fwhv6 1/1 Running 0 87s microcks-keycloak-postgresql-868b7dbdd4-pb52g 1/1 Running 0 2m43s microcks-mongodb-78888fb67f-7t2vf 1/1 Running 4 (3m57s ago) 8m2s microcks-postman-runtime-857c577dfb-d597r 1/1 Running 0 8m2s strimzi-cluster-operator-95d88f6b5-p8bvs 1/1 Running 0 16m Now you can extract the Kafka cluster certificate using kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath='{.data.ca\\.crt}' | base64 -d \u0026gt; ca.crt and apply the checks found at Async Features with Docker Compose\nStart with loading the User signed-up API sample within your Microcks instance - remember that you have to validate the self-signed certificates like in the basic install first.\nNow connect to the Kafka broker pod to check a topic has been correctly created and that you can consume messages from there:\n$ kubectl -n microcks exec microcks-kafka-kafka-0 -it -- /bin/sh --- INPUT --- sh-4.4$ cd bin sh-4.4$ ./kafka-topics.sh --bootstrap-server localhost:9092 --list UsersignedupAPI-0.1.1-user-signedup __consumer_offsets microcks-services-updates sh-4.4$ ./kafka-console-consumer.sh --bootstrap-server microcks-kafka-kafka-bootstrap:9092 --topic UsersignedupAPI-0.1.1-user-signedup {\u0026#34;id\u0026#34;: \u0026#34;eNc5TNaPlHAKa38XQA8N7HkSRHl7Yvm1\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699907417\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;g9uDUhXPOPtwK9bZYSGmqbxHAC3tTxAz\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699907428\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;kllBuhcv3kTRNg75sFxWH6HGLtSbpXwZ\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699917413\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} 
{\u0026#34;id\u0026#34;:\u0026#34;YE2ZAdVwSK9JLGEyLFebHxMOVfmYlzs1\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699917426\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} ^CProcessed a total of 4 messages sh-4.4$ exit exit command terminated with exit code 130 And finally, from your Mac host, you can install the kcat utility to consume messages as well. You\u0026rsquo;ll need to reference the ca.crt certificate you previously extracted:\n$ kcat -b microcks-kafka.kafka.127.0.0.1.nip.io:443 -X security.protocol=SSL -X ssl.ca.location=ca.crt -t UsersignedupAPI-0.1.1-user-signedup --- OUTPUT --- % Auto-selecting Consumer mode (use -P or -C to override) {\u0026#34;id\u0026#34;: \u0026#34;zYcAzFlRoTGvu9Mu4ajg30lr1fBa4Kah\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699827456\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;v0TkDvd1Z7RxynQvi1i0NmXAaLPzuYXE\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699827585\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;JK55813rQ938Hj50JWXy80s5KWC61Uvr\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703699837416\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;MZnR6UeKVXMhJET6asTjafPpfldiqXim\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703699837430\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} [...] % Reached end of topic UsersignedupAPI-0.1.1-user-signedup [0] at offset 30 ^C% 5. Delete everything and stop the cluster Deleting the microcks Helm release from your cluster is straightforward. Then you can finally delete your Kind cluster to save some resources!\n$ helm delete microcks -n microcks --- OUTPUT --- release \u0026#34;microcks\u0026#34; uninstalled $ kind delete cluster --- OUTPUT --- Deleting cluster \u0026#34;kind\u0026#34; ... Deleted nodes: [\u0026#34;kind-control-plane\u0026#34;] Wrap-up You\u0026rsquo;ve been through this guide and learned how to install Microcks on a Kubernetes cluster using Helm. Congrats! 🎉\nIf you\u0026rsquo;d like to learn more about all the available installation parameters, you can check our Helm Chart Parameters reference documentation.\nHappy learning!\n"},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/first-graphql-mock/","title":"Your 1st GraphQL mock","description":"","searchKeyword":"","content":"Overview This tutorial is a step-by-step walkthrough on how to use GraphQL schemas to get mocks for your GraphQL API. This is a hands-on introduction to the GraphQL Conventions reference that brings all the details on the conventions being used.\nWe will go through a practical example based on the famous PetStore API. We’ll build the reference petstore-1.0.graphql file by iterations, highlighting the details to get you started with mocking GraphQL on Microcks.\nTo complete this tutorial, you will need one additional tool: Postman to define sample data that will be used by your mocks.
To validate that our mock is working correctly, you\u0026rsquo;ll be able to reuse Postman as well, but we\u0026rsquo;ll also provide simple curl commands.\nLet\u0026rsquo;s go! 💥\n1. Setup Microcks and GraphQL schema skeleton The first mandatory step is obviously to set up Microcks 😉. For GraphQL usage, we don\u0026rsquo;t need any particular setup and the simple docker way of deploying Microcks as exposed in Getting started is perfectly suited. Following the getting started, you should have a Microcks instance running on http://localhost:8585.\nThis could be on another port if 8585 is already used on your machine.
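For reference, such an instance can be started with the same command used in the stateful mocks guide of this documentation:\n$ docker run -p 8585:8080 -it --rm quay.io/microcks/microcks-uber:latest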
Now let\u0026rsquo;s start with the skeleton of our GraphQL schema for the PetStore API. We\u0026rsquo;ll start with general information on this API and with the definition of one type and one query:\nPet is the data structure that represents a registered pet in our store - it has an id, a name and a color, allPets is the query that allows fetching all the registered pets as an API call result. One important thing with GraphQL conventions in Microcks is that we must add an additional specific comment in this schema file so that we can identify your API name and version (something GraphQL Schema does not allow us to handle by default). The microcksId: comment simply identifies the API name and version separated with a colon (:).\nHere\u0026rsquo;s the first iteration of our GraphQL Schema:\n# microcksId: Petstore Graph API : 1.0 schema { query: Query } type Pet { id: ID! name: String! color: String! } type Query { allPets: [Pet]! } You can now save this as a file on your disk, then go to the Importers page in the left navigation menu and choose to Upload this file. The file should import correctly and you should receive a toast notification on the upper right corner. Then, while browsing APIs | Services, you should get access to the following details in Microcks:\n2. Specifying mock data with Postman We have loaded a GraphQL schema definition in Microcks that correctly discovered the structure of your API, but you have no sample data loaded at the moment. We\u0026rsquo;re going to fix this using Postman and create a Collection to hold our mock data.\nIn your Postman Workspace, start creating a new standard and empty Collection. As one of our conventions, your Collection must have the full name of your GraphQL API: Petstore Graph API. The documentation summary you put in the Collection must also start with version=1.0 as illustrated below:\nHaving the same name and the same version in the Postman Collection is very important as it will allow Microcks to merge this information with the one from the GraphQL schema file.\nWe will use this Collection to specify sample data for our mock. This is a three step process that is illustrated below in the slider (you can use the blue dots to freeze the swiper below):\n1️⃣ Add a new Request named allPets. Change this request to be a POST request and update its URL to http://allPets. This will ensure Microcks will associate it with the correct GraphQL operation,\n2️⃣ On this request, add a new example with the name of your choice. Edit this example to put a list of Pets as the result body. You can copy/paste the JSON snippet below:\n{ \u0026#34;data\u0026#34;: { \u0026#34;allPets\u0026#34;: [ {\u0026#34;id\u0026#34;: \u0026#34;1\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Zaza\u0026#34;, \u0026#34;color\u0026#34;: \u0026#34;blue\u0026#34;}, {\u0026#34;id\u0026#34;: \u0026#34;2\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Tigress\u0026#34;, \u0026#34;color\u0026#34;: \u0026#34;stripped\u0026#34;}, {\u0026#34;id\u0026#34;: \u0026#34;3\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Maki\u0026#34;, \u0026#34;color\u0026#34;: \u0026#34;calico\u0026#34;}, {\u0026#34;id\u0026#34;: \u0026#34;4\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Toufik\u0026#34;, \u0026#34;color\u0026#34;: \u0026#34;stripped\u0026#34;} ] } } 3️⃣ Finally, export your Collection to a local file with the name of your choice. You can find ours in the PetstoreGraph.postman.json file.\n🚨 Take care of saving your edits before exporting!\n3. Basic query of GraphQL API It\u0026rsquo;s now time to import this Postman Collection back in Microcks and see the results! Go to the Importers page in the left navigation menu and choose to Upload this file. Proceed with care because this time you need to tick the box telling Microcks to consider the Collection as a Secondary Artifact like below:\nYour GraphQL API details should now have been updated with the samples you provided via the Postman Collection:\n🤔 You may have noticed in the above screenshot that dispatching properties are empty for now. This is normal as we\u0026rsquo;re on a basic operation with no routing logic. We\u0026rsquo;ll talk about dispatchers in the next section.\nMicrocks has found allPets as a valid sample to build a simulation upon. A mock URL has been made available. We can use this to test the query as demonstrated below with a curl command:\n$ echo \u0026#39;{ \u0026#34;query\u0026#34;: \u0026#34;query { allPets }\u0026#34; }\u0026#39; | tr -d \u0026#39;\\n\u0026#39; | curl \\ -X POST \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ -s -d @- \\ http://localhost:8585/graphql/Petstore+Graph+API/1.0 { \u0026#34;data\u0026#34;:{ \u0026#34;allPets\u0026#34;:[ { \u0026#34;id\u0026#34;:\u0026#34;1\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Zaza\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;blue\u0026#34; }, { \u0026#34;id\u0026#34;:\u0026#34;2\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Tigress\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;stripped\u0026#34; }, { \u0026#34;id\u0026#34;:\u0026#34;3\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Maki\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;calico\u0026#34; }, { \u0026#34;id\u0026#34;:\u0026#34;4\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Toufik\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;stripped\u0026#34; } ] } } This is nice! However, remember that one of GraphQL\u0026rsquo;s most powerful features is to allow consumers to specify the data they actually need. What if we only care about the pets\u0026rsquo; id and color?
Let\u0026rsquo;s try a new filtered query:\n$ echo \u0026#39;{ \u0026#34;query\u0026#34;: \u0026#34;query { allPets { id color } }\u0026#34; }\u0026#39; | tr -d \u0026#39;\\n\u0026#39; | curl \\ -X POST \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ -s -d @- \\ http://localhost:8585/graphql/Petstore+Graph+API/1.0 { \u0026#34;data\u0026#34;:{ \u0026#34;allPets\u0026#34;:[ { \u0026#34;id\u0026#34;:\u0026#34;1\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;blue\u0026#34; }, { \u0026#34;id\u0026#34;:\u0026#34;2\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;stripped\u0026#34; }, { \u0026#34;id\u0026#34;:\u0026#34;3\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;calico\u0026#34; }, { \u0026#34;id\u0026#34;:\u0026#34;4\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;stripped\u0026#34; } ] } } Fantastic! 🙌 Microcks is applying GraphQL semantics and filtering your mock data!\n💡 As a consequence, you understand the importance with GraphQL of providing a value for all the mock attributes. This doesn\u0026rsquo;t mean that your consumers will receive everything, but you\u0026rsquo;ll offer them the ability to apply GraphQL semantics.\nThis is your first GraphQL mock 🎉 Nice achievement!\n4. Using query variables in GraphQL query Let\u0026rsquo;s make things a bit more elaborate by adding query arguments. Now assume we want to provide a simple searching method to retrieve all pets in store using a simple filter. We\u0026rsquo;ll end up adding a new searchPets() method in our API. Of course, we\u0026rsquo;ll have to define a name input argument so that users can specify name=zoe to get all the pets having zoe in their name.\nSo we\u0026rsquo;ll add a new query in our GraphQL schema like below:\ntype Query { allPets: [Pet]! searchPets(name: String!): [Pet] } You can then import the updated GraphQL file into Microcks using the upload dialog but without ticking the box, as we want to update our service definition and not simply add test data. You can check the updated result:\nWhat about the dispatcher property we mentioned earlier? You can see that it now has the QUERY_ARGS value. Because of the presence of arguments in the new query definition, Microcks has inferred a routing logic based on this argument. If you get access to the operation details, you\u0026rsquo;ll see that the associated rule is name. Microcks will use the name to route the incoming GraphQL query.\nLet\u0026rsquo;s complete our Postman Collection with a new request for the new searchPets method and a new example for searching for pets having a k in their name. This time it can be useful to also provide an example for the request body, which now uses a variable identified by $name:
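In case the screenshot isn\u0026rsquo;t handy, such a request body example can look like this sketch, mirroring the curl command used a bit later in this tutorial:\n{ \u0026#34;query\u0026#34;: \u0026#34;query search($name: String) { searchPets(name: $name) }\u0026#34;, \u0026#34;variables\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;k\u0026#34; } }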
🚨 Take care of saving your edits before exporting!\nImport this updated Postman Collection back in Microcks - this time you need to tick the box - and see the results:\nLet\u0026rsquo;s try the new GraphQL query mock with this command, this time specifying the variables property to provide a name:\n$ echo \u0026#39;{ \u0026#34;query\u0026#34;: \u0026#34;query search($name: String) { searchPets(name: $name) }\u0026#34;, \u0026#34;variables\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;k\u0026#34; } }\u0026#39; | tr -d \u0026#39;\\n\u0026#39; | curl \\ -X POST \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ -s -d @- \\ http://localhost:8585/graphql/Petstore+Graph+API/1.0 { \u0026#34;data\u0026#34;:{ \u0026#34;searchPets\u0026#34;:[ { \u0026#34;id\u0026#34;:\u0026#34;3\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Maki\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;calico\u0026#34; }, { \u0026#34;id\u0026#34;:\u0026#34;4\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Toufik\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;stripped\u0026#34; } ] } } 🎉 Fantastic! We now have a mock with routing logic based on request arguments.\n💡 Microcks\u0026rsquo; dispatcher can support multiple arguments to find the appropriate response to an incoming request. In that case, the dispatcher rule will have the form of arg_1 \u0026amp;\u0026amp; arg_2 \u0026amp;\u0026amp; arg_3.\n🛠️ As an exercise to validate your understanding, just add a new i pets sample so that when users specify a filter with value i, the 3 correct cats are returned (Tigress, Maki and Toufik). Once both cases are passing, you can also try some more advanced queries like the one below. Yes, Microcks supports advanced GraphQL semantics like composite queries and fragments 😉\n$ echo \u0026#39;{ \u0026#34;query\u0026#34;: \u0026#34;{ k_pets: searchPets(name: \\\u0026#34;k\\\u0026#34;) { ...comparisonFields } i_pets: searchPets(name: \\\u0026#34;i\\\u0026#34;) { ...comparisonFields } } fragment comparisonFields on Pet { name }\u0026#34; }\u0026#39; | tr -d \u0026#39;\\n\u0026#39; | curl \\ -X POST \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ -s -v -d @- \\ http://localhost:8585/graphql/Petstore+Graph+API/1.0 { \u0026#34;data\u0026#34;:{ \u0026#34;k_pets\u0026#34;:[ {\u0026#34;name\u0026#34;:\u0026#34;Maki\u0026#34;}, {\u0026#34;name\u0026#34;:\u0026#34;Toufik\u0026#34;} ], \u0026#34;i_pets\u0026#34;:[ {\u0026#34;name\u0026#34;:\u0026#34;Tigress\u0026#34;}, {\u0026#34;name\u0026#34;:\u0026#34;Maki\u0026#34;}, {\u0026#34;name\u0026#34;:\u0026#34;Toufik\u0026#34;} ] } } 5. Mocking a mutation operation And now the final step! Let\u0026rsquo;s deal with a new method that allows registering a new pet within the Petstore. For that, you\u0026rsquo;ll typically have to define a new createPet() mutation in the API. In order to be meaningful to the user of this operation, a mock would have to integrate some logic that reuses content from the incoming request and/or generates sample data. That\u0026rsquo;s typically what we\u0026rsquo;re going to do in this last section 😉\nLet\u0026rsquo;s add such a new operation by updating the GraphQL schema and adding the following elements:\nschema { query: Query mutation: Mutation } type NewPet { name: String! color: String!
} type Mutation { createPet(newPet: NewPet!): Pet } You can then import the updated GraphQL Schema file into Microcks using the upload dialog but without ticking the box, as we want to update our service definition and not simply add test data. You can check the updated result:\nAs said above, we want to define a smart mock with some logic. Thankfully, Microcks has the ability to generate dynamic mock content. When defining our example in the Postman Collection, we\u0026rsquo;re going to use three specific notations that are:\n{{ randomInt(5,10) }} for asking Microcks to generate a random integer between 5 and 10 for us (remember: the other pets have ids going from 1 to 4), {{ request.body/variables/newPet/name }} for asking Microcks to reuse here the name property provided as a variable in the request body. {{ request.body/variables/newPet/color }} for asking Microcks to reuse here the color property provided as a variable in the request body. Here\u0026rsquo;s the final snippet of the response body you may want to copy/paste:\n{ \u0026#34;data\u0026#34;: { \u0026#34;createPet\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;{{ randomInt(5,10) }}\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;{{ request.body/variables/newPet/name }}\u0026#34;, \u0026#34;color\u0026#34;: \u0026#34;{{ request.body/variables/newPet/color }}\u0026#34; } } } Let\u0026rsquo;s complete our Postman Collection with a new request for the new createPet method and a new example named new pet:\n🚨 Take care of saving your edits before exporting!\nImport this updated Postman Collection back in Microcks - this time you need to tick the box - and verify the results:\nLet\u0026rsquo;s now finally test this new method using some content and see what\u0026rsquo;s going on:\n$ echo \u0026#39;{ \u0026#34;query\u0026#34;: \u0026#34;mutation createPet($newPet: NewPet) { createPet(newPet: $newPet) { id name color } }\u0026#34;, \u0026#34;variables\u0026#34;: { \u0026#34;newPet\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Rusty\u0026#34;, \u0026#34;color\u0026#34;: \u0026#34;harlequin\u0026#34; } } }\u0026#39; | tr -d \u0026#39;\\n\u0026#39; | curl \\ -X POST \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ -s -d @- \\ http://localhost:8585/graphql/Petstore+Graph+API/1.0 { \u0026#34;data\u0026#34;:{ \u0026#34;createPet\u0026#34;:{ \u0026#34;id\u0026#34;:\u0026#34;5\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Rusty\u0026#34;, \u0026#34;color\u0026#34;:\u0026#34;harlequin\u0026#34; } } } As a result, we\u0026rsquo;ve got our pet named Rusty being returned with a new id being generated. Ta Dam! 🥳\n🛠️ As a validation, send a few more requests changing your pet name. You\u0026rsquo;ll see that the given name is always returned and that the id is actually random. But you can also go further by defining an advanced dispatcher that will inspect your request variables content to decide which response must be sent back. Very useful to describe different creation or error cases!\nWrap-up In this tutorial we have seen the basics of how Microcks can be used to mock responses of a GraphQL API. We introduced some Microcks concepts like examples, dispatchers and templating features that are used to produce a live simulation.
This definitely helps speed up the feedback loop on the ongoing design and the development of a consumer using this API.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/first-grpc-mock/","title":"Your 1st gRPC mock","description":"","searchKeyword":"","content":"Overview This tutorial is a step-by-step walkthrough on how to use a gRPC / Protocol Buffers specification to get mocks for your gRPC Service. This is a hands-on introduction to the gRPC Conventions reference that brings all the details on the conventions being used.\nWe will go through a practical example based on the famous PetStore API. We’ll build the reference petstore-v1.proto file by iterations, highlighting the details to get you started with mocking gRPC on Microcks.\nTo complete this tutorial, you will need these two additional tools:\nPostman to define sample data that will be used by your mocks, grpcurl to interact with and check your mocks are working as expected (this is optional as you can also do this using Postman, but I prefer the command line 😉) Ready? Go! 💥\n1. Setup Microcks and Protobuf skeleton The first mandatory step is obviously to set up Microcks 😉. For gRPC usage, we don\u0026rsquo;t need any particular setup and the simple docker way of deploying Microcks as exposed in Getting started is perfectly suited.\nRun the command below to get your Microcks instance ready:\ndocker run -p 8585:8080 -p 8686:9090 -it --rm quay.io/microcks/microcks-uber:latest-native This could be on other ports if 8585 or 8686 are already used on your machine.\nFollowing the getting started, you should have a Microcks instance running on http://localhost:8585 with a gRPC server available on localhost:8686.\nNow let\u0026rsquo;s start with the skeleton of our Protobuf contract for the Petstore Service. We\u0026rsquo;ll start with the definition of three different messages:\nPet is the data structure that represents a registered pet in our store - it has an id and a name, PetsResponse is a structure that allows returning many pets as a service method result, AllPetsRequest is an empty structure that represents the input type of our first method. We also have the definition of one getPets() method that allows returning all the pets in the store. This is over-simplistic but enough to help demonstrate how to do things. Here\u0026rsquo;s the Protobuf contract:\nsyntax = \u0026#34;proto3\u0026#34;; package org.acme.petstore.v1; message Pet { int32 id = 1; string name = 2; } message AllPetsRequest {} message PetsResponse { repeated Pet pets = 1; } service PetstoreService { rpc getPets(AllPetsRequest) returns (PetsResponse); } You can now save this as a file on your disk, then go to the Importers page in the left navigation menu and choose to Upload this file. The file should import correctly and you should receive a toast notification on the upper right corner. Then, while browsing APIs | Services, you should get access to the following details in Microcks:\n2. Specifying mock data with Postman We have loaded a gRPC / Protobuf definition in Microcks that correctly discovered the structure of your service, but you have no sample data loaded at the moment. We\u0026rsquo;re going to fix this using Postman and create a Collection to hold our mock data.\nIn your Postman Workspace, start creating a new standard and empty Collection.
As one of our conventions, your Collection must have the full name of your gRPC Service: org.acme.petstore.v1.PetstoreService. The documentation summary you put in the Collection must also start with version=v1 like illustrated below:\nHaving the same name and the same version in the Postman Collection is very important as it will allow Microcks to merge this information with the one from the Protobuf file.\n🤔 You may wonder about the origin of this v1 version? It\u0026rsquo;s another convention that follows gRPC versioning best practices. As there\u0026rsquo;s no pre-defined way to specify the version of a Protobuf file, the community agreed that the last part of the package name will be the version. Microcks has extracted this information from org.acme.petstore.v1. Read more on the gRPC conventions Microcks is following.\nFrom now on, we will use this Collection to specify sample data for our mock. This is a three-step process that is illustrated below in the slider (you can use the blue dots to freeze the swiper below):\n1️⃣ Add a new Request named getPets. Change this request to be a POST request and update its URL to http:///getPets. This will ensure Microcks will associate it to the correct gRPC method,\n2️⃣ On this request, add a new example with the name of your choice. Edit this example to put an empty object as the request body ({}) and a list of Pets as the result body. You can copy/paste the JSON snippet below:\n{ \u0026#34;pets\u0026#34;: [ { \u0026#34;id\u0026#34;: 1, \u0026#34;name\u0026#34;: \u0026#34;Zaza\u0026#34; }, { \u0026#34;id\u0026#34;: 2, \u0026#34;name\u0026#34;: \u0026#34;Tigress\u0026#34; }, { \u0026#34;id\u0026#34;: 3, \u0026#34;name\u0026#34;: \u0026#34;Maki\u0026#34; }, { \u0026#34;id\u0026#34;: 4, \u0026#34;name\u0026#34;: \u0026#34;Toufik\u0026#34; } ] } 3️⃣ Finally, export your Collection to a local file with the name of your choice. You can find ours in the PetstoreService.postman.json file.\n🚨 Take care of saving your edits before exporting!\n3. Basic operation of gRPC service It\u0026rsquo;s now the moment to import this Postman Collection back in Microcks and see the results! Go to the Importers page in the left navigation menu and choose to Upload this file. Proceed with care because this time you need to tick the box telling Microcks to consider the Collection as a Secondary Artifact like below:\nYour gRPC service details should now have been updated with the samples you provided via the Postman Collection:\n🤔 You may have noticed in the above screenshot that dispatching properties are empty for now. This is normal as we\u0026rsquo;re on a basic operation with no routing logic. We\u0026rsquo;ll talk about dispatchers in the next section.\nMicrocks has found All Pets as a valid sample to build a simulation upon. A mock URL has been made available but remember that in our case, we exposed the gRPC port on 8686. We can use this to test the service method as demonstrated below with a grpcurl command:\n$ grpcurl -plaintext -d \u0026#39;{}\u0026#39; localhost:8686 org.acme.petstore.v1.PetstoreService/getPets { \u0026#34;pets\u0026#34;: [ { \u0026#34;id\u0026#34;: 1, \u0026#34;name\u0026#34;: \u0026#34;Zaza\u0026#34; }, { \u0026#34;id\u0026#34;: 2, \u0026#34;name\u0026#34;: \u0026#34;Tigress\u0026#34; }, { \u0026#34;id\u0026#34;: 3, \u0026#34;name\u0026#34;: \u0026#34;Maki\u0026#34; }, { \u0026#34;id\u0026#34;: 4, \u0026#34;name\u0026#34;: \u0026#34;Toufik\u0026#34; } ] } This is your first gRPC mock 🎉 Nice achievement!
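\n💡 Before moving on, you can also double-check what Microcks exposes on the gRPC port. Here\u0026rsquo;s a small sketch using grpcurl - the first two commands assume your Microcks version has gRPC server reflection enabled; the last one works without reflection by pointing grpcurl at your local contract file:\n$ grpcurl -plaintext localhost:8686 list\n$ grpcurl -plaintext localhost:8686 describe org.acme.petstore.v1.PetstoreService\n$ grpcurl -plaintext -proto petstore-v1.proto -d \u0026#39;{}\u0026#39; localhost:8686 org.acme.petstore.v1.PetstoreService/getPets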
\n4. Using request arguments in gRPC method Let\u0026rsquo;s make things a bit more spicy by adding request arguments. Now assume we want to provide a simple searching method to retrieve all the pets in the store using a simple filter. We\u0026rsquo;ll end up adding a new searchPets() method to our service. Of course, we\u0026rsquo;ll have to define a new PetSearchRequest input message so that users will specify name=zoe to get all the pets having zoe in their name.\nSo we\u0026rsquo;ll add new elements to our Protobuf document like below: a new message, and we complete the service with a new rpc method:\nmessage PetSearchRequest { string name = 1; } service PetstoreService { rpc getPets(AllPetsRequest) returns (PetsResponse); rpc searchPets(PetSearchRequest) returns (PetsResponse); } You can then import the updated Protobuf file into Microcks using the upload dialog but without ticking the box as we want to update our service definition and not simply add test data. You can check the updated result:\nWhat about the dispatcher property we mentioned earlier? You can see that it now has the QUERY_ARGS value. Because of the presence of arguments in the new method definition, Microcks has inferred a routing logic based on this argument. If you get access to the operation details, you\u0026rsquo;ll see that the associated rule is name. Microcks will use the name to route incoming gRPC requests.\nLet\u0026rsquo;s complete our Postman Collection with a new request for the new searchPets method and a new example for searching for pets having a k in their name:\n🚨 Take care of saving your edits before exporting!\nImport this updated Postman Collection back in Microcks - this time you need to tick the box - and see the results:\nLet\u0026rsquo;s try the new gRPC method mock with this command:\n$ grpcurl -plaintext -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;k\u0026#34;}\u0026#39; localhost:8686 org.acme.petstore.v1.PetstoreService/searchPets { \u0026#34;pets\u0026#34;: [ { \u0026#34;id\u0026#34;: 3, \u0026#34;name\u0026#34;: \u0026#34;Maki\u0026#34; }, { \u0026#34;id\u0026#34;: 4, \u0026#34;name\u0026#34;: \u0026#34;Toufik\u0026#34; } ] } 🎉 Fantastic! We now have a mock with routing logic based on request arguments.\n💡 The Microcks dispatcher can support multiple arguments to find the appropriate response to an incoming request. In that case, the dispatcher rule will have the form arg_1 \u0026amp;\u0026amp; arg_2 \u0026amp;\u0026amp; arg_3.\n🛠️ As an exercise to validate your understanding, just add a new i pets sample so that when a user specifies a filter with value i, the 3 correct cats are returned (Tigress, Maki and Toufik).
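\n💡 Once you\u0026rsquo;ve added that sample and re-imported the Collection, here\u0026rsquo;s a quick sketch to check your work - the dispatch only looks at the name argument value, so it must be i exactly:\n$ grpcurl -plaintext -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;i\u0026#34;}\u0026#39; localhost:8686 org.acme.petstore.v1.PetstoreService/searchPets\nYou should get back the three pets whose names contain an i: Tigress, Maki and Toufik.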
\n5. Mocking a creation operation And now the final step! Let\u0026rsquo;s deal with a new method that allows registering a new pet within the Petstore. For that, you\u0026rsquo;ll typically have to define a new createPet() method on the PetstoreService. In order to be meaningful to the user of this operation, a mock would have to integrate some logic that reuses contents from the incoming request and/or generates sample data. That\u0026rsquo;s typically what we\u0026rsquo;re going to do in this last section 😉\nLet\u0026rsquo;s add such a new operation into the Protobuf file by adding the following elements:\nmessage PetNameRequest { string name = 1; } service PetstoreService { rpc getPets(AllPetsRequest) returns (PetsResponse); rpc searchPets(PetSearchRequest) returns (PetsResponse); rpc createPet(PetNameRequest) returns (Pet); } You can then import the updated Protobuf file into Microcks using the upload dialog but without ticking the box as we want to update our service definition and not simply add test data. You can check the updated result:\nAs said above, we want to define a smart mock with some logic. Thankfully, Microcks has this ability to generate dynamic mock content. When defining our example in the Postman Collection, we\u0026rsquo;re going to use two specific notations:\n{{ randomInt(5,10) }} for asking Microcks to generate a random integer between 5 and 10 for us (remember: the other pets have ids going from 1 to 4), {{ request.body/name }} for asking Microcks to reuse here the name property of the request body. Simple as that! Let\u0026rsquo;s complete our Postman Collection with a new request for the new createPet method and a new example named new pet:\n🚨 Take care of saving your edits before exporting!\nImport this updated Postman Collection back in Microcks - this time you need to tick the box - and verify the results:\nLet\u0026rsquo;s now finally test this new method using some content and see what\u0026rsquo;s going on:\n$ grpcurl -plaintext -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;Rusty\u0026#34;}\u0026#39; localhost:8686 org.acme.petstore.v1.PetstoreService/createPet { \u0026#34;id\u0026#34;: 6, \u0026#34;name\u0026#34;: \u0026#34;Rusty\u0026#34; } As a result, we\u0026rsquo;ve got our pet named Rusty being returned with a newly generated id. Ta Dam! 🥳\n🛠️ As a validation, send a few more requests changing your pet name. You\u0026rsquo;ll see that the given name is always returned and that the id is actually random. But you can also go further by defining an advanced dispatcher that will inspect your request body content to decide which response must be sent back. Very useful to describe different creation or error cases!
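\n💡 A minimal sketch for that validation - the pet names below are arbitrary, and each call should echo the submitted name with a freshly generated id between 5 and 10:\n$ grpcurl -plaintext -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;Pepper\u0026#34;}\u0026#39; localhost:8686 org.acme.petstore.v1.PetstoreService/createPet\n$ grpcurl -plaintext -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;Oscar\u0026#34;}\u0026#39; localhost:8686 org.acme.petstore.v1.PetstoreService/createPet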
\nWrap-up In this tutorial we have seen the basics of how Microcks can be used to mock responses of a gRPC service. We introduced some Microcks concepts like examples, dispatchers and templating features that are used to produce a live simulation. This definitely helps speeding-up the feedback loop on the ongoing design as well as the development of a consumer using this service.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/","title":"Explanations","description":"Here below all the documentation pages related to **Explanations**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/guides/integration/","title":"Integration","description":"Here below all the guides related to **Integration**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/references/configuration/operator-config/","title":"Operator Configuration","description":"","searchKeyword":"","content":"Introduction Operators are next-gen installers, maintainers and life-cycle managers for Kubernetes native applications. Operators are Kubernetes native pieces of software (aka Kube controllers) that manage specific Custom Resources defining their domain of expertise. Microcks provides an Operator that was developed using the Operator Framework SDK and that is distributed via OperatorHub.io.\nThe Microcks project currently proposes two operators with different maturity levels:\nThe Ansible-based Operator is the legacy one. It is production-ready and currently distributed via OperatorHub.io The Quarkus-based Operator is an ongoing effort in active development, aiming to provide a more robust, scalable and feature-rich operator in the future. Ansible-based Operator The Microcks Ansible Operator only defines one custom resource called MicrocksInstall: a description of the instance configuration you want to deploy. The properties of this custom resource are briefly described below.\nThis operator is namespace-scoped; you can easily install it in your namespace using:\nkubectl apply -f https://microcks.io/operator/operator-latest.yaml -n microcks or:\nkubectl apply -f https://microcks.io/operator/operator-1.9.0.yaml -n microcks CustomResource Reference For full instructions and deployment options, we recommend reading the README on the GitHub repository.\nOption 1: Minimal features Here below is a minimalistic MicrocksInstall custom resource:\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: name: microcks version: \u0026#34;1.9.1\u0026#34; microcks: url: microcks.192.168.99.100.nip.io keycloak: url: keycloak.192.168.99.100.nip.io privateUrl: http://microcks-keycloak.microcks.svc.cluster.local:8080 Option 2: Full features Here\u0026rsquo;s now a more complex MicrocksInstall custom resource that can be used to configure Ingress secrets and certificates, replicas, enable Async API support, etc\u0026hellip;\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: name: microcks version: \u0026#34;1.9.1\u0026#34; microcks: replicas: 4 url: microcks.192.168.99.100.nip.io ingressSecretRef: my-secret-for-microcks-ingress postman: replicas: 2 keycloak: install: true persistent: true volumeSize: 1Gi url: keycloak.192.168.99.100.nip.io privateUrl: http://microcks-keycloak.microcks.svc.cluster.local:8080 ingressSecretRef: my-secret-for-keycloak-ingress mongodb: install: true uri: mongodb:27017 database: sampledb secretRef: secret: mongodb usernameKey: database-user passwordKey: database-password persistent: true volumeSize: 2Gi features: async: enabled: true defaultBinding: KAFKA defaultFrequency: 10 kafka: install: true url: 192.168.99.100.nip.io repositoryFilter: enabled: true labelKey: app labelLabel: Application labelList: app,status The installation process is demonstrated in the following video that also demonstrates AsyncAPI mocking features:\nQuarkus-based Operator This is an ongoing effort under active development. Please check the README on the GitHub repository for the latest information.
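\n💡 Coming back to the Ansible-based operator: once the operator is running, deploying an instance is just a matter of applying one of the MicrocksInstall resources above. Here\u0026rsquo;s a minimal sketch, assuming you saved the Option 1 resource as microcks-install.yaml (the plural resource name microcksinstalls is an assumption - verify it against the CRD on your cluster):\n$ kubectl apply -f microcks-install.yaml -n microcks\n$ kubectl get microcksinstalls -n microcks\n$ kubectl get pods -n microcks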
\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/test-endpoints/","title":"Test Parameters","description":"","searchKeyword":"","content":"Introduction From the page displaying basic information on your API or Service mocks, you have the ability to launch new tests against different endpoints that may represent the different environments of your development process. Hitting the NEW TEST\u0026hellip; button leads you to the following form where you will be able to specify a target URL for the test, as well as a Runner—a testing strategy for your new launch:\nThis reference documentation walks you through the different parameters available when launching a new test on Microcks. All the parameters mentioned below are available whether you\u0026rsquo;re launching a Test via the Web UI, via the API, via the CLI or any other library.\nService under test Service under test is simply the reference of the API/Service specification used for this test. It is a pair made of a Service Name and a Service Version. Depending on the Runner you choose, Microcks will reuse the information of an Artifact attached to this Service name and version.\nTest Endpoint The Test Endpoint is simply a URI where a deployed component is providing an endpoint implementing your API specification. In the testing literature, this is usually defined as the URI of the System Under Test.\nDepending on your API/Service type and the protocol binding you want to connect with (especially for event-based APIs), Test Endpoints may have a different specific syntax. Please jump to the Endpoints syntax section on this page to learn more.\nTest Runner Microcks offers different strategies for running tests on the endpoints where the microservices being developed are deployed. We recommend having a read at our explanations on Conformance Testing. Such strategies are implemented as Test Runners. Here are the default Test Runners available within Microcks:\nTest Runner API \u0026amp; Service Types Description HTTP REST and SOAP Simplest test runner that only checks that valid target endpoints are deployed and available - it means returning a 20x or 404 Http status code when appropriate. This can be called a simple \u0026ldquo;smoke test\u0026rdquo;. SOAP SOAP Extension of HTTP Runner that also checks that the response is syntactically valid regarding the SOAP WebService contract. It realizes a validation of the response payload using the XSD schemas associated to the service. SOAP_UI REST and SOAP When the API artifact is defined using SoapUI: ensures that assertions put into SoapUI Test cases are checked valid. Reports failures otherwise. POSTMAN REST, SOAP and GRAPHQL When the API artifact is defined using Postman: executes test scripts as specified within a Postman Collection. Reports failures otherwise. OPEN_API_SCHEMA REST When the API artifact is defined using OpenAPI: it executes example requests and checks that results have the expected Http status and that the payload is compliant with the schema specified into the OpenAPI specification. Reports failures otherwise. ASYNC_API_SCHEMA EVENT When the API artifact is defined using AsyncAPI: it connects to the specified broker endpoints, consumes messages and checks that the payload is compliant with the schema specified into the AsyncAPI specification. Reports failures otherwise. GRPC_PROTOBUF GRPC When the API artifact is defined using gRPC/Protobuf: it executes example requests and checks that the results payload is compliant with the Protocol Buffers schema specified into the gRPC protobuf file. Reports failures otherwise. GRAPHQL_SCHEMA GRAPHQL When the API is of type GraphQL: it executes example requests and checks that the results payload is compliant with the GraphQL Schema of the API. Reports failures otherwise.
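\n💡 As an illustration, all of these parameters also surface in the microcks-cli. Here\u0026rsquo;s a sketch of launching an OPEN_API_SCHEMA test from the command line - the service name, endpoint and credentials are placeholders, and you should double-check flag names against the CLI documentation for your version:\n$ microcks-cli test \u0026#39;API Pastry:1.1.0\u0026#39; http://my-app.example.com/api OPEN_API_SCHEMA --microcksURL=https://microcks.example.com/api --keycloakClientId=microcks-serviceaccount --keycloakClientSecret=my-secret --waitFor=10sec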
\nOperations Depending on the Test you are running, you may want to filter the list of operations that will actually be tested. By default, all operations are included in the test but you can pick and choose the ones you want.\n💡 When running a Test on an Event-based API using the ASYNC_API_SCHEMA strategy, you will have to choose one and only one operation at a time. This is because Async endpoints may be different for each and every operation, so a Microcks test can just include one Async operation.\nTimeout Depending on the type of Service or Tests you are running, the specification of a Timeout may be mandatory. This is a numerical value expressed in milliseconds.\nSecret Depending on the Test Endpoint you are connecting to, you may need additional authentication information - like credentials or custom X509 Certificates. You may reuse an Authentication Secret that has been made available in the Microcks installation by your administrator.\nOAuth2 If the secured Test Endpoint cannot be accessed using a static Authentication Secret, Microcks is able to handle an OAuth2 / OpenID Connect authentication flow as the Test\u0026rsquo;s prerequisite in order to retrieve an ephemeral bearer token.\nThe supported OAuth2 grant types are Client credentials, Refresh token and Password. For each of these authentication flows, you will have to provide additional information like:\nThe OAuth2 Token URI: a URL that will be used for token retrieval, The Client Id: the OAuth2 client identifier, The Client Secret: the OAuth2 secret, The Scopes: the optional OAuth2 scopes you need (openid is always included). Additionally, you will have to provide a Refresh Token when using the Refresh token grant type 😉\nHeaders Override This optional parameter allows you to add/override request headers with global or operation specific ones. You have to use a comma-separated string for multiple values corresponding to the same header.\nEndpoints syntax HTTP based APIs For HTTP based APIs (REST, SOAP, GraphQL or gRPC), this is a simple URL that should respect the following pattern:\nhttp[s]://{service.endpoint.url:port}[/{service.path}] The /{service.path} part may be optional if your target API is deployed on the root context.\nEvent based APIs For Event based APIs tested through AsyncAPI, the pattern depends on the protocol binding you\u0026rsquo;d like to test.\nKafka Kafka Test Endpoints have the following form, with optional parameters placed just after a ? and separated using the \u0026amp; character:\nkafka://{kafka.broker.url:port}/{kafka.topic.name}[?param1=value1\u0026amp;param2=value2] Optional Params Description registryUrl The URL of the schema registry that is associated to the tested topic. This parameter is required when using and testing Avro encoded messages. registryUsername The username used if access to the registry is secured. registryAuthCredSource The source for authentication credentials if any. Valid values are just USER_INFO. As an example, you may have this kind of Test Endpoint value: kafka://mybroker.example.com:443/test-topic?registryUrl=https://schema-registry.example.com\u0026amp;registryUsername=fred:letmein\u0026amp;registryAuthCredSource=USER_INFO\nMQTT MQTT Test Endpoints have the following form, with no optional parameters:\nmqtt://{mqtt.broker.url:port}/{mqtt.topic.name}
AMQP AMQP 0.9.1 Test Endpoints have the following form, with optional parameters placed just after a ? and separated using the \u0026amp; character:\namqp://{amqp.broker.url:port}/[{amqp.vhost}/]{amqp.destination.type}/{amqp.destination.name}[?param1=value1\u0026amp;param2=value2] amqp.destination.type is used to specify if we should connect to either a queue (use the q value) or an exchange, specifying its type: d for direct, f for fanout, t for topic, h for headers. Then you have to specify either the queue or exchange name in amqp.destination.name.\nDepending on the type of destination, you will need additional optional parameters as specified below:\nOptional Params Description routingKey Used to specify a routing key for direct or topic exchanges. If not specified the * wildcard is used. durable Flag telling if the exchange to connect to is durable or not. Default is false. h.{header} A bunch of headers whose names start with h. in order to deal with a headers exchange. The x-match property is set to any to gather as many messages as possible. As an example, you may have these kinds of Test Endpoint values: amqp://rabbitmq.example.com:5672/h/my-exchange-headers?h.h1=h1\u0026amp;h.h2=h2 or amqp://rabbitmq.example.com:5672/my-vhost/t/my-exchange-topic?routingKey=foo\nWebSocket WebSocket Test Endpoints have the following form, with no optional parameters:\nws://{ws.endpoint.url:port}/{channel.name} NATS NATS Test Endpoints have the following form, with no optional parameters:\nnats://{nats.endpoint.url:port}/{queue-or-subject.name} Google PubSub Google PubSub Test Endpoints have the following form, with no optional parameters:\ngooglepubsub://{google-platform-project.name}/{topic.name} Amazon SQS Amazon Simple Queue Service Test Endpoints have the following form, with optional parameters placed just after a ? and separated using the \u0026amp; character:\nsqs://{aws.region}/{sqs.queue.name}[?param1=value1] Optional Params Description overrideUrl The AWS endpoint override URI used for API calls. Handy for using SQS via LocalStack Amazon SNS Amazon Simple Notification Service Test Endpoints have the following form, with optional parameters placed just after a ? and separated using the \u0026amp; character:\nsns://{aws.region}/{sns.topic.name}[?param1=value1] Optional Params Description overrideUrl The AWS endpoint override URI used for API calls. Handy for using SNS via LocalStack "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/async-protocols/rabbitmq-support/","title":"RabbitMQ Mocking \u0026amp; Testing","description":"","searchKeyword":"","content":"Overview This guide shows you how to use the RabbitMQ protocol with Microcks. RabbitMQ is one of the most popular open source message brokers. It supports different protocols, most notably AMQP 0.9.1, the protocol RabbitMQ was originally developed for.\nMicrocks supports RabbitMQ/AMQP as a protocol binding for AsyncAPI. That means that Microcks is able to connect to a RabbitMQ broker for publishing mock messages as soon as it receives a valid AsyncAPI Specification, and to connect to any RabbitMQ broker in your organization to check that flowing messages are compliant with the schema described within your specification.\nLet\u0026rsquo;s start! 🚀\n1. Setup RabbitMQ broker connection The first mandatory step here is to setup Microcks so that it will be able to connect to a RabbitMQ broker for sending mock messages. Microcks has been tested successfully with RabbitMQ version 3.9.13. It can be deployed as a containerized workload on your Kubernetes cluster.
Microcks does not provide any installation scripts or procedures; please refer to the project\u0026rsquo;s or related products\u0026rsquo; documentation.\nIf you have used the Operator based installation of Microcks, you\u0026rsquo;ll need to add some extra properties to your MicrocksInstall custom resource. The fragment below shows the important ones:\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: [...] features: async: enabled: true [...] amqp: url: rabbitmq-broker.app.example.com:5672 username: microcks password: microcks The async feature should of course be enabled, and then the important things to notice are located in the amqp block:\nurl is the hostname + port where the broker can be reached by Microcks, username is simply the user to use for authenticating the connection, password represents this user\u0026rsquo;s credentials. If you have used the Helm Chart based installation of Microcks, this is the corresponding fragment put in a Values.yml file:\n[...] features: async: enabled: true [...] amqp: url: rabbitmq-broker.app.example.com:5672 username: microcks password: microcks The actual connection to the RabbitMQ broker will only be made once Microcks sends mock messages to it. Let\u0026rsquo;s see below how to use the AMQP binding with AsyncAPI.\n2. Use RabbitMQ in AsyncAPI As AMQP is not the default binding into Microcks, you should explicitly add it as a valid binding within your AsyncAPI contract. Here is below a fragment of an AsyncAPI specification file that shows the important things to notice when planning to use AMQP and Microcks with AsyncAPI. It comes from a sample you can find on our GitHub repository.\nasyncapi: \u0026#39;2.1.0\u0026#39; info: title: Account Service [...] channels: user/signedup: bindings: amqp: is: routingKey exchange: name: signedup-exchange type: topic durable: true autoDelete: false vhost: / bindingVersion: 0.2.0 subscribe: message: $ref: \u0026#39;#/components/messages/UserSignedUp\u0026#39; [...] You\u0026rsquo;ll notice that we just have to add a non empty amqp block within the channel bindings. An amqp binding is either a queue or a routingKey. When choosing a routingKey you\u0026rsquo;re in fact describing an exchange that should be further typed as topic, direct, fanout or headers. See the full binding spec for details.\nAs usual, as Microcks internal mechanics are based on examples, you will also have to attach examples to your AsyncAPI specification.\nasyncapi: \u0026#39;2.1.0\u0026#39; info: title: Account Service [...] channels: user/signedup: bindings: amqp: is: routingKey exchange: name: signedup-exchange type: topic durable: true autoDelete: false vhost: / bindingVersion: 0.2.0 subscribe: message: $ref: \u0026#39;#/components/messages/UserSignedUp\u0026#39; components: messages: UserSignedUp: payload: [...] examples: - name: Laurent payload: displayName: Laurent Broudoux email: [email protected] - name: Random payload: displayName: \u0026#39;{{randomFullName()}}\u0026#39; email: \u0026#39;{{randomEmail()}}\u0026#39; If you\u0026rsquo;re not yet accustomed to it, you may wonder what this {{randomFullName()}} notation is. These are just Templating functions that allow the generation of dynamic content! 😉\nNow simply import your AsyncAPI file into Microcks either using a Direct upload import or by defining an Importer Job. Both methods are described in this page.
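\n💡 If you went the Helm route, here\u0026rsquo;s a minimal sketch to apply the fragment above to an existing release - assuming your values file is named values.yml and your release and namespace are both called microcks:\n$ helm upgrade microcks microcks/microcks -n microcks -f values.yml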
\n3. Validate your mocks Now it’s time to validate that the mock publication of messages on the connected broker is correct. In a real world scenario this means developing a consuming script or application that connects to the topic where Microcks is publishing messages.\nFor our Account Service, we have such a consumer in a GitHub repository.\nFollow these steps to retrieve it, install dependencies and check the Microcks mocks:\n$ git clone https://github.com/microcks/api-tooling.git $ cd api-tooling/async-clients/amqpjs-client $ npm install $ node consumer.js amqp://\u0026lt;user\u0026gt;:\u0026lt;password\u0026gt;@rabbitmq-broker.app.example.com:5672 AccountService-1.1.0-user/signedup Connecting to amqp://\u0026lt;user\u0026gt;:\u0026lt;password\u0026gt;@rabbitmq-broker.app.example.com:5672 on topic AccountService-1.1.0-user/signedup { \u0026#34;displayName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34; } { \u0026#34;displayName\u0026#34;: \u0026#34;Marcela Langworth\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34; } [...] 🎉 Fantastic! We are receiving the two different messages corresponding to the two defined examples every 3 seconds, which is the default publication frequency. You\u0026rsquo;ll notice that the displayName and email properties have different values thanks to the templating notation.\n4. Run AsyncAPI tests Now the final step is to perform some tests of the validation features in Microcks. As we need an API implementation for that - and it\u0026rsquo;s not as easy as writing an HTTP based API implementation - we have some helpful scripts in our api-tooling GitHub repository. These scripts are made for working with the Account Service sample we used so far, but feel free to adapt them for your own use.\nImagine that you want to validate messages from a QA environment with a dedicated RabbitMQ broker. Still being in the amqpjs-client folder, now use the producer.js utility script to publish messages on a signedup-exchange topic. Our producer takes care of creating a non-durable exchange of type topic on the RabbitMQ broker:\n$ node producer.js amqp://\u0026lt;user\u0026gt;:\u0026lt;password\u0026gt;@rabbitmq-qa-broker.app.example.com:5672 signedup-exchange topic Connecting to amqp://\u0026lt;user\u0026gt;:\u0026lt;password\u0026gt;@rabbitmq-qa-broker.app.example.com:5672 on destination signedup-exchange Publishing {\u0026#34;displayName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;} Publishing {\u0026#34;displayName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;} Publishing {\u0026#34;displayName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;} [...] Do not interrupt the execution of the script for now.\nIf the QA broker access is secured - let\u0026rsquo;s say with credentials and custom certificates - we will first have to manage a Secret in Microcks to hold this information. Within the Microcks console, first go to the Administration section and the Secrets tab.\nAdministration and Secrets will only be available to people having the administrator role assigned. Please check this documentation for details.\nThe screenshot below illustrates the creation of such a secret for your QA RabbitMQ Broker with username and password.\nOnce saved, we can go create a New Test within the Microcks web console.
Use the following elements in the Test form:\nTest Endpoint: amqp://rabbitmq-qa-broker.app.example.com:5672/t/signedup-exchange that is referencing the AMQP broker endpoint, Operation: SUBSCRIBE user/signedup Runner: ASYNC API SCHEMA for validating against the AsyncAPI specification of the API, Timeout: Keep the default of 10 seconds, Secret: This is where you\u0026rsquo;ll select the QA RabbitMQ Broker you previously created. Launch the test, wait a few seconds, and you should get access to the test results as illustrated below:\nYou may have noticed the /t/ path element in the Test endpoint used above. You may be aware that RabbitMQ supports different kinds of Exchanges, and /t/ is here to tell Microcks it should consider a topic. As an exercise, you can reuse our producer.js script above and replace topic with fanout, direct or headers. Respectively, you\u0026rsquo;ll have to replace /t/ with /f/, /d/ and /h/ to tell Microcks the expected type of Exchange.\nThis is fine and we can see that Microcks captured messages and validated them against the payload schema that is embedded into the AsyncAPI specification. In our sample, every property is required and the message does not allow additionalProperties to be defined.\nSo now let\u0026rsquo;s see what happens if we tweak that a bit\u0026hellip; Open the producer.js script in your favorite editor to comment line 21 and uncomment line 22. This removes the displayName property and adds an unexpected name property, as shown below after having restarted the producer:\n$ node producer.js amqp://\u0026lt;user\u0026gt;:\u0026lt;password\u0026gt;@rabbitmq-qa-broker.app.example.com:5672 signedup-exchange topic Connecting to amqp://\u0026lt;user\u0026gt;:\u0026lt;password\u0026gt;@rabbitmq-qa-broker.app.example.com:5672 on destination signedup-exchange Publishing {\u0026#34;name\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;} Publishing {\u0026#34;name\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;} Publishing {\u0026#34;name\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;} [...] Relaunch a new test and you should get results similar to those below:\n🥳 We can see that there\u0026rsquo;s now a failure and that\u0026rsquo;s perfect! What does that mean? It means that when your application is sending garbage, Microcks will be able to spot this and inform you that the expected message format is not respected.\nWrap-Up In this guide we have seen how Microcks can also be used to send mock messages on a RabbitMQ Broker connected to the Microcks instance. This helps speeding-up the development of applications consuming these messages. We finally ended up demonstrating how Microcks can be used to detect any drifting issues between the expected message format and the one effectively used by real-life producers.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/administration/snapshots/","title":"Snapshotting/restoring Repository","description":"","searchKeyword":"","content":"Overview This guide will teach you what Microcks Snapshots are and what their use-case sweet spots are.
As an administrator, you will learn how to select the elements you would like to snapshot and how to import a previous Snapshot to restore content.\n🚨 Prerequisites\nUsers can only be managed by a Microcks admin - we mean people having the admin role assigned. In order to be able to retrieve the list of users and operate changes, the user should also have the manage-users and manage-clients roles from the realm-management Keycloak internal client. See the Keycloak documentation for more on this point.\n1. Use-cases Microcks Snapshots are not complete database exports because they only integrate the Services \u0026amp; APIs definitions parts. As an example, they do not embed all the test runs and analytics data.\n🚨 Warning\nSnapshots cannot be substitutes for proper database backup and restore procedures! If you choose to deploy Microcks as a central instance that should always be up-and-running, database backups are necessary to keep all the history of the different objects and retain the configuration of your instance.\nSnapshots are lightweight structures that can be used to:\neasily exchange a set of Services \u0026amp; APIs definitions with another instance of Microcks, easily setup a new Microcks instance dedicated to mocking a functional subsystem - optionally with different configured response times for simulating a real behaviour, easily backup your instance if you do not mind losing test runs and analytics data Snapshots can only be managed by Microcks administrators - we mean people having the administrator role assigned. If you need further information on how to manage users and roles, please check here.\n2. Create a Snapshot Snapshots management is simply a thumbnail within the Administration page that is available from the vertical menu on the left once logged in as administrator. Creating and exporting a new Snapshot is as simple as selecting the different API \u0026amp; Services you want to export and clicking the Export button on the top right. See the capture below:\n💡 Be careful: the services list panel has limited height and is scrollable. If you have many services, you may not see some of them at first sight.\nThe export allows you to download a JSON file called microcks-repository.json that embeds the foundational elements of a repository:\n{ \u0026#34;services\u0026#34;: [ { \u0026#34;id\u0026#34;: \u0026#34;5dd5661d7afe58688acc7eff\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;API Pastry\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.1.0\u0026#34;, \u0026#34;xmlNS\u0026#34;: null, \u0026#34;type\u0026#34;: \u0026#34;REST\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;createdOn\u0026#34;: 1574266397964, \u0026#34;lastUpdate\u0026#34;: 1584877046174, \u0026#34;annotations\u0026#34;: null, \u0026#34;labels\u0026#34;: { \u0026#34;domain\u0026#34;: \u0026#34;pastry\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;GA\u0026#34; } }, \u0026#34;operations\u0026#34;: [ [...] ] }, [...] ], \u0026#34;resources\u0026#34;: [...], \u0026#34;requests\u0026#34;: [...], \u0026#34;responses\u0026#34;: [...] } 3. Restoring from a Snapshot The opposite import operation can easily be done by uploading your Snapshot file and hitting the Import button 😉
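\n💡 Snapshots can also be driven programmatically. Here\u0026rsquo;s a rough sketch of what that could look like - the /api/export and /api/import paths, the serviceIds parameter and the authentication details are all assumptions to double-check against your instance\u0026rsquo;s API reference:\n$ curl -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; \u0026#34;https://microcks.example.com/api/export?serviceIds=5dd5661d7afe58688acc7eff\u0026#34; -o microcks-repository.json\n$ curl -H \u0026#34;Authorization: Bearer $TOKEN\u0026#34; -F \u0026#34;file=@microcks-repository.json\u0026#34; https://microcks.example.com/api/import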
\nWrap-up Snapshots are lightweight structures that are really helpful to quickly share or reload API \u0026amp; Services definitions. They are very convenient to use, for example, when Developing with Testcontainers to ensure all your developers share the same third-party API definitions.\nKeep in mind that Snapshots can also be exported and imported using the Microcks REST API, as sketched above! 😉\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/grpc-conventions/","title":"gRPC Conventions","description":"","searchKeyword":"","content":"In order to use gRPC in Microcks, you will need two artifacts for each service definition, as explained in Multi-artifacts support:\nA gRPC / Protocol Buffers file definition that holds the Service metadata and operations definitions, A Postman Collection file that holds the mock examples (requests and responses) for the different operations of the gRPC Service. Conventions In order to be correctly imported and understood by Microcks, your gRPC and Postman files should follow a small set of reasonable conventions and best practices.\nAs of today Microcks only supports the proto3 syntax as it is now the default and encouraged version from the gRPC community,\ngRPC doesn\u0026rsquo;t have the notion of Service version. In Microcks, this notion is critical and we will use the package information from the proto file to compute a version.\nFor package names containing more than 2 path levels, we\u0026rsquo;ll extract the last one as being the version. So package io.github.microcks.grpc.hello.v1; will produce version v1 We\u0026rsquo;ll keep shorter package names unchanged, so package com.acme; will produce version com.acme which is not very unique 😞. So be sure to follow gRPC versioning best practices! Your Postman collection will need to have a name that matches the gRPC service name and a custom property version in its description that matches the above computed version,\nYour Postman collection will need to organize examples into requests having the same name and url as the gRPC methods,\nYour Postman collection will hold examples defined in JSON, as JSON is a textual format easier to use than binary Protobuf 😅\nWe recommend having a look at our sample gRPC for HelloService as well as the companion Postman collection to fully understand and see those conventions in action.\nDispatchers gRPC service mocks in Microcks support 4 different types of dispatchers:\nempty dispatcher means that Microcks will pick the first available response of the operation, QUERY_ARGS dispatcher can be inferred automatically at import time. It is used for dispatching based on the content of the gRPC Request if this one is made of Protobuf scalar types (string, integer, boolean, float, \u0026hellip;) except bytes, JSON_BODY dispatcher can be used for dispatching based on the content of the complete gRPC Request body translated in JSON, SCRIPT dispatcher can be used for dispatching using a custom script that evaluates the complete gRPC Request body translated in JSON. Illustration Let\u0026rsquo;s dive into the details of our sample io.github.microcks.grpc.hello.v1.HelloService gRPC service.\nSpecifying Service structure This is a fairly trivial gRPC Service that just greets newcomers.
You can see below the definition found in hello-v1.proto.\nsyntax = \u0026#34;proto3\u0026#34;; package io.github.microcks.grpc.hello.v1; option java_multiple_files = true; message HelloRequest { string firstname = 1; string lastname = 2; } message HelloResponse { string greeting = 1; } service HelloService { rpc greeting(HelloRequest) returns (HelloResponse); } Considering the package of this proto file, when imported into Microcks, it will discover the io.github.microcks.grpc.hello.v1.HelloService service with version v1 and the unique operation greeting.\nSpecifying Service examples Specification of examples is done using a Postman Collection, as examples cannot be attached to the main proto file, thanks to the multi-artifacts support feature.\nUsing Postman, just create a new Collection - using the same name as the gRPC Service and adding the custom property version at the beginning of the description like illustrated below:\nYou can now start organizing and creating requests that match the gRPC service method names. For our example, we\u0026rsquo;re specifying the greeting request for the greeting gRPC method.\nThe next step is now to create a bunch of examples for each of the requests/operations of your Collection as explained by the Postman documentation. You\u0026rsquo;ll give each example a meaningful name regarding the use-case it is supposed to represent. The example url should also match the name of the gRPC method; here we have a simple http:///greeting.\nDefining dispatch rules If the default inferred dispatchers don\u0026rsquo;t match your use-case, you\u0026rsquo;ll need an additional step when assembling data coming from the gRPC Proto file and the Postman Collection: defining how to dispatch requests. For gRPC, you can typically use a JSON_BODY or a SCRIPT dispatcher as mentioned above.\nYou can use a Metadata artifact for that or directly edit the dispatcher in the Web UI. Hereafter we have defined a simple rule that is routing incoming requests depending on the value of the firstname property of the incoming message.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/metadada/","title":"API Metadata Format","description":"","searchKeyword":"","content":"Introduction Some Microcks mock-specific metadata or properties cannot be fully deduced from the common attributes coming from OpenAPI or AsyncAPI. Thus we rely on default values that can later be overwritten by a manager within Microcks, either using the UI or through the Microcks API.\n💡 For OpenAPI and AsyncAPI, we introduced OpenAPI extensions and AsyncAPI extensions to allow providing this information using x-microcks properties.\nBut sometimes you don\u0026rsquo;t want to add x-microcks extension attributes into your AsyncAPI / OpenAPI document OR you need to specify these metadata and properties for some other artifact types like Protobuf + Postman Collection for GRPC mocking for instance 😉.\nHence we propose defining these metadata and properties in a standalone document called an APIMetadata; a document that can be imported as a secondary artifact thanks to the Multi-Artifacts support.\n💡 For the latter gRPC use-case, it means that the Defining dispatch rules step can be done automatically by importing another artifact that lives right next to your files in the Git repo.\nFor ease of use, we provide a JSON Schema that you can download here.
Thus, you can integrate it in your code editor and benefit from code completion and validation.\nAPIMetadata documents are intended to be imported as secondary artifacts only, thanks to the Multi-Artifacts support.\nAPI Metadata properties Let\u0026rsquo;s start with an example! Here below is an illustration of what such an APIMetadata document could look like for one API. If you\u0026rsquo;re a reader of the Microcks Blog, you\u0026rsquo;ll notice this sample API with custom dispatching rules was introduced in the Advanced Dispatching and Constraints for mocks post.\napiVersion: mocks.microcks.io/v1alpha1 kind: APIMetadata metadata: name: WeatherForecast API version: 1.1.0 labels: domain: weather status: GA team: Team C operations: \u0026#39;GET /forecast/{region}\u0026#39;: delay: 100 dispatcher: FALLBACK dispatcherRules: |- { \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARTS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;region\u0026#34;, \u0026#34;fallback\u0026#34;: \u0026#34;Unknown\u0026#34; } This example is pretty straightforward to understand and explain:\nThis document is related to the WeatherForecast API in version 1.1.0. That means that this API version should already exist in your repository, otherwise the document will be ignored, This document specifies additional labels used for Organizing the Microcks repository. These labels will be added to the existing ones, This document specifies a default delay as well as custom dispatching information for our GET operation. The name of the operation should perfectly match the name of an existing operation - whether defined through OpenAPI, AsyncAPI, Postman Collection, SoapUI Project or Protobuf definition - otherwise it will be ignored. 💡 Note that we can use multi-line notation in YAML but we will have to escape everything and put \ before double-quotes and \n characters if specified using JSON.\nThe semantics of those attributes are exactly the same as the ones introduced in OpenAPI extensions and AsyncAPI extensions.\nStarting with Microcks 1.11.0, you can also declare mock constraints in your APIMetadata file:\napiVersion: mocks.microcks.io/v1alpha1 kind: APIMetadata metadata: name: WeatherForecast API version: 1.1.0 labels: domain: weather status: GA team: Team C operations: \u0026#39;GET /forecast/{region}\u0026#39;: delay: 100 parameterConstraints: - name: Authorization in: header required: true recopy: false mustMatchRegexp: \u0026#34;^Bearer\\\\s[a-zA-Z0-9\\\\._-]+$\u0026#34; - name: x-request-id in: header required: true recopy: true Importing API Metadata When you\u0026rsquo;re happy with your API Metadata, just put the resulting YAML or JSON file into your favorite Source Configuration Management tool, grab the URL of the file corresponding to the branch you want to use and add it as a regular Importer Job into Microcks. On import, Microcks should detect that it\u0026rsquo;s an APIMetadata specification file and choose the correct importer.\nUsing a Hello GRPC metadata example here, you should get the following screen. Do not forget to tick the Secondary Artifact checkbox!
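\n💡 To make the gRPC use-case concrete, here\u0026rsquo;s a minimal sketch of an APIMetadata document attaching a dispatch rule to the HelloService sample from the gRPC conventions - the response names laurent and john are hypothetical and must match example names defined in your own Postman Collection:\napiVersion: mocks.microcks.io/v1alpha1 kind: APIMetadata metadata: name: io.github.microcks.grpc.hello.v1.HelloService version: v1 operations: \u0026#39;greeting\u0026#39;: dispatcher: JSON_BODY dispatcherRules: |- { \u0026#34;exp\u0026#34;: \u0026#34;/firstname\u0026#34;, \u0026#34;operator\u0026#34;: \u0026#34;equals\u0026#34;, \u0026#34;cases\u0026#34;: { \u0026#34;Laurent\u0026#34;: \u0026#34;laurent\u0026#34;, \u0026#34;default\u0026#34;: \u0026#34;john\u0026#34; } } Imported as a secondary artifact, this sets the greeting operation\u0026rsquo;s dispatcher based on the firstname property of the incoming message.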
\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/minikube-helm/","title":"On Minikube with Helm","description":"","searchKeyword":"","content":"Overview This guide will walk you through the different steps of running a full Microcks installation on your laptop using Minikube. Step #4 is actually optional and may only be of interest if you\u0026rsquo;d like to use the Asynchronous features of Microcks.\nThe installation notes were run on an Apple MacBook M2 but those steps would be sensibly the same on any Linux machine.\nLet\u0026rsquo;s go 🚀\n1. Preparation Being on a Mac, people usually use brew to install minikube. However, it is also available from several different package managers out there. You can also check the Getting Started guide to access direct binary downloads. Obviously, you\u0026rsquo;ll also need the kubectl utility to interact with your cluster.\n$ brew install minikube $ minikube version minikube version: v1.32.0 commit: 8220a6eb95f0a4d75f7f2d7b14cef975f050512d We use the basic, default configuration of minikube coming with the docker driver:\n$ minikube config view - driver: docker 2. Start and configure a cluster We\u0026rsquo;re now going to start a Kube cluster. Start your minikube cluster with the defaults.\nThe default locale of the commands below is French, but you\u0026rsquo;ll easily translate to your own language thanks to the nice emojis at the beginning of lines 😉\n$ minikube start --- OUTPUT --- 😄 minikube v1.32.0 sur Darwin 14.1.2 (arm64) 🎉 minikube 1.33.1 est disponible ! Téléchargez-le ici : https://github.com/kubernetes/minikube/releases/tag/v1.33.1 💡 Pour désactiver cette notification, exécutez : \u0026#39;minikube config set WantUpdateNotification false\u0026#39; ✨ Utilisation du pilote docker basé sur le profil existant 👍 Démarrage du noeud de plan de contrôle minikube dans le cluster minikube 🚜 Extraction de l\u0026#39;image de base... 🔄 Redémarrage du docker container existant pour \u0026#34;minikube\u0026#34; ... 🐳 Préparation de Kubernetes v1.28.3 sur Docker 24.0.7... 🔗 Configuration de bridge CNI (Container Networking Interface)... 🔎 Vérification des composants Kubernetes... 💡 Après que le module est activé, veuiller exécuter \u0026#34;minikube tunnel\u0026#34; et vos ressources ingress seront disponibles à \u0026#34;127.0.0.1\u0026#34; ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0 ▪ Utilisation de l\u0026#39;image gcr.io/k8s-minikube/storage-provisioner:v5 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/controller:v1.9.4 ▪ Utilisation de l\u0026#39;image docker.io/kubernetesui/dashboard:v2.7.0 ▪ Utilisation de l\u0026#39;image docker.io/kubernetesui/metrics-scraper:v1.0.8 🔎 Vérification du module ingress... 💡 Certaines fonctionnalités du tableau de bord nécessitent le module metrics-server. Pour activer toutes les fonctionnalités, veuillez exécuter : minikube addons enable metrics-server	🌟 Modules activés: storage-provisioner, default-storageclass, dashboard, ingress 🏄 Terminé ! kubectl est maintenant configuré pour utiliser \u0026#34;minikube\u0026#34; cluster et espace de noms \u0026#34;default\u0026#34; par défaut. You need to enable the ingress add-on if not already set by default:\n$ minikube addons enable ingress --- OUTPUT --- 💡 ingress est un addon maintenu par Kubernetes. Pour toute question, contactez minikube sur GitHub.
Vous pouvez consulter la liste des mainteneurs de minikube sur : https://github.com/kubernetes/minikube/blob/master/OWNERS 💡 Après que le module est activé, veuiller exécuter \u0026#34;minikube tunnel\u0026#34; et vos ressources ingress seront disponibles à \u0026#34;127.0.0.1\u0026#34; ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/controller:v1.9.4 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0 ▪ Utilisation de l\u0026#39;image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0 🔎 Vérification du module ingress... 🌟 Le module \u0026#39;ingress\u0026#39; est activé You can check the connection to the cluster and that Ingresses are OK by running the following command:\n$ kubectl get pods -n ingress-nginx --- OUTPUT --- NAME READY STATUS RESTARTS AGE ingress-nginx-admission-create-dz95x 0/1 Completed 0 26m ingress-nginx-admission-patch-5bjwv 0/1 Completed 1 26m ingress-nginx-controller-b6894599f-pml9s 1/1 Running 0 26m 3. Install Microcks with default options We\u0026rsquo;re now going to install Microcks with basic options. We\u0026rsquo;ll do that using the Helm Chart so you\u0026rsquo;ll also need the helm binary. You can use brew install helm on Mac for that.\nThen, we\u0026rsquo;ll need to prepare the /etc/hosts file to access Microcks using an Ingress. Add the line containing the microcks.m.minikube.local address. You need to declare 2 host names for both Microcks and Keycloak.\n$ cat /etc/hosts --- OUTPUT --- ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 microcks.m.minikube.local keycloak.m.minikube.local 255.255.255.255 broadcasthost ::1 localhost Now create a new namespace and do the install in this namespace:\n$ kubectl create namespace microcks $ helm repo add microcks https://microcks.io/helm $ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.m.minikube.local --set keycloak.url=keycloak.m.minikube.local --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --- OUTPUT --- NAME: microcks LAST DEPLOYED: Tue Dec 19 15:23:23 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.m.minikube.local. GRPC mock service is available at \u0026#34;microcks-grpc.m.minikube.local\u0026#34;. It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.m.minikube.local to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by login in using the username and password found into \u0026#39;microcks-keycloak-admin\u0026#39; secret.
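\n💡 That last note is worth acting on: once the pods are up, here\u0026rsquo;s a quick sketch to reveal the generated Keycloak admin credentials - the exact key names inside the secret may vary across chart versions, so this simply decodes every key it finds:\n$ kubectl get secret microcks-keycloak-admin -n microcks -o go-template=\u0026#39;{{range $k, $v := .data}}{{$k}}={{$v | base64decode}}{{\u0026#34;\\n\u0026#34;}}{{end}}\u0026#39;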
Wait for the images to be pulled, pods to be started and ingresses to be there:\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-865b66d867-httf7 1/1 Running 0 56s microcks-keycloak-5bd7866b5f-9kr8k 1/1 Running 0 56s microcks-keycloak-postgresql-6cfc7bf6c4-qb9rv 1/1 Running 0 56s microcks-mongodb-d584889cf-wnzzb 1/1 Running 0 56s microcks-postman-runtime-5cbc478db7-rzprn 1/1 Running 0 56s $ kubectl get ingresses -n microcks --- OUTPUT --- NAME CLASS HOSTS ADDRESS PORTS AGE microcks nginx microcks.m.minikube.local 192.168.49.2 80, 443 2m4s microcks-grpc nginx microcks-grpc.m.minikube.local 192.168.49.2 80, 443 2m4s microcks-keycloak nginx keycloak.m.minikube.local 192.168.49.2 80, 443 2m4s To access the ingress from your browser, you\u0026rsquo;ll need to start the networking tunneling service of Minikube - it may ask for sudo permission depending on when you opened your latest session:\n$ minikube tunnel --- OUTPUT --- ✅ Tunnel démarré avec succès 📌 REMARQUE : veuillez ne pas fermer ce terminal car ce processus doit rester actif pour que le tunnel soit accessible... ❗ Le service/ingress microcks nécessite l\u0026#39;exposition des ports privilégiés : [80 443] 🔑 sudo permission will be asked for it. 🏃 Tunnel de démarrage pour le service microcks-keycloak. ❗ Le service/ingress microcks-grpc nécessite l\u0026#39;exposition des ports privilégiés : [80 443] 🏃 Tunnel de démarrage pour le service microcks. 🔑 sudo permission will be asked for it. 🏃 Tunnel de démarrage pour le service microcks-grpc. ❗ Le service/ingress microcks-keycloak nécessite l\u0026#39;exposition des ports privilégiés : [80 443] 🔑 sudo permission will be asked for it. 🏃 Tunnel de démarrage pour le service microcks-keycloak. Start by opening https://keycloak.m.minikube.local in your browser to validate the self-signed certificate. Once done, you can visit https://microcks.m.minikube.local in your browser, validate the self-signed certificate and start playing around with Microcks!\nThe default user/password is admin/microcks123\n4. Install Microcks with asynchronous options In this section, we\u0026rsquo;re doing a complete install of Microcks, enabling the asynchronous protocol features. This requires deploying additional pods and a Kafka cluster. The Microcks install can deploy and manage its own Kafka cluster using the Strimzi project.\nTo be able to expose the Kafka cluster to the outside of Minikube, you’ll need to enable SSL passthrough on nginx. This requires updating the default ingress controller deployment:\n$ kubectl patch -n ingress-nginx deployment/ingress-nginx-controller --type=\u0026#39;json\u0026#39; \\ -p \u0026#39;[{\u0026#34;op\u0026#34;:\u0026#34;add\u0026#34;,\u0026#34;path\u0026#34;:\u0026#34;/spec/template/spec/containers/0/args/-\u0026#34;,\u0026#34;value\u0026#34;:\u0026#34;--enable-ssl-passthrough\u0026#34;}]\u0026#39; Then, you\u0026rsquo;ll also have to update your /etc/hosts file so that we can access the Microcks Kafka broker using an Ingress. Add the line containing the microcks-kafka.kafka.m.minikube.local and microcks-kafka-0.kafka.m.minikube.local hosts:\n$ cat /etc/hosts --- OUTPUT --- ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry.
## 127.0.0.1 microcks.m.minikube.local keycloak.m.minikube.local microcks-kafka.kafka.m.minikube.local microcks-kafka-0.kafka.m.minikube.local 255.255.255.255 broadcasthost ::1 localhost You\u0026rsquo;ll still need to have the minikube tunnel service up-and-running as in the previous section. Next, you have to install the latest version of the Strimzi operator:\n$ kubectl apply -f \u0026#39;https://strimzi.io/install/latest?namespace=microcks\u0026#39; -n microcks Now, you can install Microcks using the Helm chart and enable the asynchronous features:\n$ helm install microcks microcks/microcks --namespace microcks --set microcks.url=microcks.m.minikube.local --set keycloak.url=keycloak.m.minikube.local --set keycloak.privateUrl=http://microcks-keycloak.microcks.svc.cluster.local:8080 --set features.async.enabled=true --set features.async.kafka.url=kafka.m.minikube.local --- OUTPUT --- NAME: microcks LAST DEPLOYED: Tue Dec 26 15:07:35 2023 NAMESPACE: microcks STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Thank you for installing microcks. Your release is named microcks. To learn more about the release, try: $ helm status microcks $ helm get microcks Microcks is available at https://microcks.m.minikube.local. GRPC mock service is available at \u0026#34;microcks-grpc.m.minikube.local\u0026#34;. It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-microcks-grpc-secret -n microcks -o jsonpath=\u0026#39;{.data.tls\\.crt}\u0026#39; | base64 -d \u0026gt; tls.crt Keycloak has been deployed on https://keycloak.m.minikube.local to protect user access. You may want to configure an Identity Provider or add some users for your Microcks installation by logging in using the username and password found in the \u0026#39;microcks-keycloak-admin\u0026#39; secret. Kafka broker has been deployed on microcks-kafka.kafka.m.minikube.local.
It has been exposed using TLS passthrough on the Ingress controller, you should extract the certificate for your client using: $ kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath=\u0026#39;{.data.ca\\.crt}\u0026#39; | base64 -d \u0026gt; ca.crt Watch and check the pods you should get in the namespace (this can take a bit longer if you\u0026rsquo;re pulling the Kafka images for the first time):\n$ kubectl get pods -n microcks --- OUTPUT --- NAME READY STATUS RESTARTS AGE microcks-5fbf679987-kzctj 1/1 Running 1 (116s ago) 4m32s microcks-async-minion-ddfc99cf5-lcs7s 1/1 Running 5 (101s ago) 4m32s microcks-kafka-entity-operator-5755ff865-f4ktn 2/2 Running 1 (114s ago) 2m37s microcks-kafka-kafka-0 1/1 Running 0 3m microcks-kafka-zookeeper-0 1/1 Running 0 4m29s microcks-keycloak-589f68fb76-xdn5w 1/1 Running 1 (4m9s ago) 4m32s microcks-keycloak-postgresql-6cfc7bf6c4-4mc79 1/1 Running 0 4m32s microcks-mongodb-d584889cf-m74mc 1/1 Running 0 4m32s microcks-postman-runtime-5d859fcdc4-zttkv 1/1 Running 0 4m32s strimzi-cluster-operator-75d7f76545-k9scj 1/1 Running 0 6m40s Now you can extract the Kafka cluster certificate using kubectl get secret microcks-kafka-cluster-ca-cert -n microcks -o jsonpath='{.data.ca\\.crt}' | base64 -d \u0026gt; ca.crt and apply the checks found at Async Features with Docker Compose.\nStart with loading the User signed-up API sample within your Microcks instance - remember that you have to validate the self-signed certificates like in the basic install first.\nNow connect to the Kafka broker pod to check that a topic has been correctly created and that you can consume messages from there:\n$ kubectl -n microcks exec microcks-kafka-kafka-0 -it -- /bin/sh --- INPUT --- sh-4.4$ cd bin sh-4.4$ ./kafka-topics.sh --bootstrap-server localhost:9092 --list UsersignedupAPI-0.1.1-user-signedup __consumer_offsets microcks-services-updates sh-4.4$ ./kafka-console-consumer.sh --bootstrap-server microcks-kafka-kafka-bootstrap:9092 --topic UsersignedupAPI-0.1.1-user-signedup {\u0026#34;id\u0026#34;: \u0026#34;sinHVoQvNdA3Bhl4fi57IVI15390WBkn\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703599175911\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;650YIRQaB2OsG52txubYAEJfdFB3jOzh\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703599175914\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;QWimzV9X1BRgIodOWoDdsP9EKtFSniDW\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703599185914\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;ivMQIz7J7IXqps5yqcaVo6qvuByhviVk\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703599185921\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;hEUfxuQRHHZkt9zFzMl5ti9DOIp12vpd\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703599195914\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41}
{\u0026#34;id\u0026#34;:\u0026#34;OggnbfXX67QbfeMGXOTiOGT2BuqEPCPL\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703599195926\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} ^CProcessed a total of 6 messages sh-4.4$ exit exit command terminated with exit code 130 And finally, from your Mac host, you can install the kcat utility to consume messages as well. You\u0026rsquo;ll need to reference the ca.crt certificate you previously extracted:\n$ kcat -b microcks-kafka.kafka.m.minikube.local:443 -X security.protocol=SSL -X ssl.ca.location=ca.crt -t UsersignedupAPI-0.1.1-user-signedup --- OUTPUT --- % Auto-selecting Consumer mode (use -P or -C to override) {\u0026#34;id\u0026#34;: \u0026#34;FrncZaUsQFWPlcKSm4onTrw3o0sXhMkJ\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703600745149\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;EFcTdsrMuxKJiJUUikJnnSZWaKxltfJ0\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703600745275\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} {\u0026#34;id\u0026#34;: \u0026#34;Kxqp7P75cM07SwasVcK3MIsLp5oWUD52\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1703600755112\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} {\u0026#34;id\u0026#34;:\u0026#34;p2c3SbFoGflV4DzjsyA8cLqCsCZQ96fC\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1703600755117\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;John Doe\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:36} [...] % Reached end of topic UsersignedupAPI-0.1.1-user-signedup [0] at offset 106 ^C% 5. Delete everything and stop the cluster Deleting the microcks Helm release from your cluster is straightforward. Then you can finally stop your Minikube cluster to save some resources!\n$ helm delete microcks -n microcks --- OUTPUT --- release \u0026#34;microcks\u0026#34; uninstalled $ minikube stop --- OUTPUT --- ✋ Stopping node \u0026#34;minikube\u0026#34; ... 🛑 Powering off \u0026#34;minikube\u0026#34; via SSH… 🛑 1 node stopped. Wrap-up You\u0026rsquo;ve been through this guide and learned how to install Microcks on a Kubernetes cluster using Helm. Congrats! 🎉\nIf you\u0026rsquo;d like to learn more about all the available installation parameters, you can check our Helm Chart Parameters reference documentation.\nHappy learning!\n"},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/first-soap-mock/","title":"Your 1st Soap mock","description":"","searchKeyword":"","content":"Overview This tutorial is a step-by-step walkthrough on how to use SoapUI projects to get mocks for your SOAP WebService. This is a hands-on introduction to the SoapUI Conventions reference that brings all the details on the conventions being used.\nWe will go through a practical example based on the famous PetStore API.
We’ll build the reference petstore-1.0-soapui-project.xml file by iterations, highlighting the details to get you started with mocking SOAP WebServices on Microcks.\nOf course, to complete this tutorial, you will need to install SoapUI to define mocks on top of the WSDL file that describes your SOAP WebService interface. To validate that our mock is working correctly, you\u0026rsquo;ll be able to reuse SoapUI as well, but we\u0026rsquo;ll also provide simple curl commands.\nLet\u0026rsquo;s start! 💥\n1. Setup Microcks, a WSDL skeleton and a SoapUI project First mandatory step is obviously to setup Microcks 😉. For SoapUI usage, we don\u0026rsquo;t need any particular setup and the simple docker way of deploying Microcks as exposed in Getting started is perfectly suited. Following the getting started, you should have a Microcks running instance on http://localhost:8585.\nThis could be on another port if 8585 is already used on your machine.\nNow let\u0026rsquo;s start with the skeleton of our WSDL contract for the Petstore Service. We\u0026rsquo;ll start with the definition of two different types:\nPet is the data structure that represents a registered pet in our store - it has an id and a name, PetsResponse is a structure that allows returning many pets as a service method result. We also have the definition of one getPets operation that allows returning all the pets in the store. This is over-simplistic but enough to help demonstrate how to do things. Here\u0026rsquo;s the WSDL contract (yes, it\u0026rsquo;s pretty verbose 😅):\n\u0026lt;?xml version=\u0026#39;1.0\u0026#39; encoding=\u0026#39;UTF-8\u0026#39;?\u0026gt; \u0026lt;wsdl:definitions xmlns:wsdl=\u0026#34;http://schemas.xmlsoap.org/wsdl/\u0026#34; xmlns:tns=\u0026#34;http://www.acme.org/petstore\u0026#34; xmlns:soap=\u0026#34;http://schemas.xmlsoap.org/wsdl/soap/\u0026#34; name=\u0026#34;PetstoreService\u0026#34; targetNamespace=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;wsdl:types\u0026gt; \u0026lt;xs:schema xmlns:xs=\u0026#34;http://www.w3.org/2001/XMLSchema\u0026#34; xmlns:tns=\u0026#34;http://www.acme.org/petstore\u0026#34; targetNamespace=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;xs:complexType name=\u0026#34;Pet\u0026#34;\u0026gt; \u0026lt;xs:sequence\u0026gt; \u0026lt;xs:element name=\u0026#34;id\u0026#34; type=\u0026#34;xs:int\u0026#34;/\u0026gt; \u0026lt;xs:element name=\u0026#34;name\u0026#34; type=\u0026#34;xs:string\u0026#34;/\u0026gt; \u0026lt;/xs:sequence\u0026gt; \u0026lt;/xs:complexType\u0026gt; \u0026lt;xs:complexType name=\u0026#34;PetsResponse\u0026#34;\u0026gt; \u0026lt;xs:sequence\u0026gt; \u0026lt;xs:element minOccurs=\u0026#34;0\u0026#34; maxOccurs=\u0026#34;unbounded\u0026#34; name=\u0026#34;pet\u0026#34; type=\u0026#34;tns:Pet\u0026#34; /\u0026gt; \u0026lt;/xs:sequence\u0026gt; \u0026lt;/xs:complexType\u0026gt; \u0026lt;xs:element name=\u0026#34;getPets\u0026#34;\u0026gt; \u0026lt;xs:complexType/\u0026gt; \u0026lt;/xs:element\u0026gt; \u0026lt;xs:element name=\u0026#34;getPetsResponse\u0026#34; type=\u0026#34;tns:PetsResponse\u0026#34; /\u0026gt; \u0026lt;/xs:schema\u0026gt; \u0026lt;/wsdl:types\u0026gt; \u0026lt;wsdl:message name=\u0026#34;getPets\u0026#34;\u0026gt; \u0026lt;wsdl:part element=\u0026#34;tns:getPets\u0026#34; name=\u0026#34;parameters\u0026#34; /\u0026gt; \u0026lt;/wsdl:message\u0026gt;
\u0026lt;wsdl:message name=\u0026#34;getPetsResponse\u0026#34;\u0026gt; \u0026lt;wsdl:part element=\u0026#34;tns:getPetsResponse\u0026#34; name=\u0026#34;parameters\u0026#34; /\u0026gt; \u0026lt;/wsdl:message\u0026gt; \u0026lt;wsdl:portType name=\u0026#34;PetstoreService\u0026#34;\u0026gt; \u0026lt;wsdl:operation name=\u0026#34;getPets\u0026#34;\u0026gt; \u0026lt;wsdl:input message=\u0026#34;tns:getPets\u0026#34; name=\u0026#34;getPets\u0026#34;/\u0026gt; \u0026lt;wsdl:output message=\u0026#34;tns:getPetsResponse\u0026#34; name=\u0026#34;getPetsResponse\u0026#34;/\u0026gt; \u0026lt;/wsdl:operation\u0026gt; \u0026lt;/wsdl:portType\u0026gt; \u0026lt;wsdl:binding name=\u0026#34;PetstoreServiceSoapBinding\u0026#34; type=\u0026#34;tns:PetstoreService\u0026#34;\u0026gt; \u0026lt;soap:binding style=\u0026#34;document\u0026#34; transport=\u0026#34;http://schemas.xmlsoap.org/soap/http\u0026#34; /\u0026gt; \u0026lt;wsdl:operation name=\u0026#34;getPets\u0026#34;\u0026gt; \u0026lt;soap:operation soapAction=\u0026#34;http://www.acme.org/petstore/getPets\u0026#34; style=\u0026#34;document\u0026#34; /\u0026gt; \u0026lt;wsdl:input\u0026gt; \u0026lt;soap:body use=\u0026#34;literal\u0026#34; /\u0026gt; \u0026lt;/wsdl:input\u0026gt; \u0026lt;wsdl:output\u0026gt; \u0026lt;soap:body use=\u0026#34;literal\u0026#34; /\u0026gt; \u0026lt;/wsdl:output\u0026gt; \u0026lt;/wsdl:operation\u0026gt; \u0026lt;/wsdl:binding\u0026gt; \u0026lt;wsdl:service name=\u0026#34;PetstoreService\u0026#34;\u0026gt; \u0026lt;wsdl:port binding=\u0026#34;tns:PetstoreServiceSoapBinding\u0026#34; name=\u0026#34;PetstoreServiceEndpointPort\u0026#34;\u0026gt; \u0026lt;soap:address location=\u0026#34;http://localhost:8080/services/PetstoreService\u0026#34; /\u0026gt; \u0026lt;/wsdl:port\u0026gt; \u0026lt;/wsdl:service\u0026gt; \u0026lt;/wsdl:definitions\u0026gt; From now, you can save this as a file on your disk - or you can retrieve our finalized petstore-1.0.wsdl file. Then open SoapUI and choose New SOAP Project in the File menu or from the top buttons bar. Give your project a name like PetstoreService and choose to Upload this file as its definition. It should create a new folder for your project on the left pane, initialized with a Service named PetstoreServiceSoapBinding.\nWe now have some more initialization work to do. This is a four-step process that is illustrated below in the slider (you can use the blue dots to freeze the swiper below):\n1️⃣ Right-click on the imported binding and ask SoapUI to generate a new mock server for this binding,\n2️⃣ Keep the default options on the generation form. You can check that the getPets operation is correctly detected,\n3️⃣ You now have to name the mock server - this will be the name that will appear later in Microcks. Prefer something simple like PetstoreService for example,\n4️⃣ On the newly created mock server, use the left pane to add a custom property named version with 1.0 as its value. This is one of our conventions for SoapUI projects.\nTo finish this first preparation step, you can save this project as an XML file on your disk, then open Microcks in your browser, go to the Importers page in the left navigation menu and choose to Upload this file. The file should import correctly and you should receive a toast notification on the upper right corner. Then, while browsing APIs | Services, you should get access to the following details in Microcks:
2. Specifying mock data with SoapUI We have loaded a SoapUI project in Microcks that correctly discovered the structure of your WebService, but you have no sample data loaded at the moment. We\u0026rsquo;re going to fix this using SoapUI by defining:\na Response in the PetstoreService Mock Server, a Request in a new Test Suite for our Service. Let\u0026rsquo;s start with the request. This is a three-step process that is illustrated below in the slider (you can use the blue dots to freeze the swiper below):\n1️⃣ Right-click on the imported binding and ask SoapUI to generate a new test suite for this binding,\n2️⃣ Keep the default options on the generation form. You can check that the getPets operation is correctly detected,\n3️⃣ You can now rename the test suite. I like sticking with simple names like PetstoreService,\nYou can open and check the default getPets request that has been created in the Test Suite. This one is basic as we have no arguments in the request.\nLet\u0026rsquo;s now take care of the response definition. The PetstoreService Mock Server has been initialized with a default response named Response 1. To tell Microcks that this one should match the request we just defined, we have to rename it and simply call it getPets as well. This is one of our conventions for SoapUI projects.\nEdit the content of this response to put some sample data:\n\u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;soapenv:Header/\u0026gt; \u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:getPetsResponse\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;1\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Zaza\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;2\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Tigress\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;3\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Maki\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;4\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Toufik\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;/pet:getPetsResponse\u0026gt; \u0026lt;/soapenv:Body\u0026gt; \u0026lt;/soapenv:Envelope\u0026gt; Finally, the last thing we have to do is to change the dispatcher that is set on the Mock Server getPets operation. As illustrated below, change its value from the default SEQUENCE to RANDOM:\n🚨 Take care of saving your SoapUI project after your edits!\n3. Basic operation of SOAP service It\u0026rsquo;s now time to import this SoapUI Project back in Microcks and see the results! Go to the Importers page in the left navigation menu and choose to Upload this file. Your SOAP WebService details should now have been updated with the samples you provided via the SoapUI Project:\n🤔 You may have noticed in the above section and screenshot that dispatching rules are empty for now. This is normal as we\u0026rsquo;re on a basic operation with no routing logic. We\u0026rsquo;ll talk about dispatchers in the next section.\nMicrocks has found getPets as a valid sample to build a simulation upon. A mock URL has been made available.
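💡 The mock URL follows a simple pattern - here assuming the default local setup used in this tutorial:\nhttp://localhost:8585/soap/{serviceName}/{version} The service name (PetstoreService) comes from the WSDL, and the version (1.0) comes from the version custom property you set on the mock server in SoapUI.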
We can use this to test the query as demonstrated below with a curl command:\n$ curl -X POST \u0026#39;http://localhost:8585/soap/PetstoreService/1.0\u0026#39; -H \u0026#39;Content-Type: application/xml\u0026#39; \\ -d \u0026#39;\u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt;\u0026lt;soapenv:Header/\u0026gt;\u0026lt;soapenv:Body\u0026gt;\u0026lt;pet:getPets/\u0026gt;\u0026lt;/soapenv:Body\u0026gt;\u0026lt;/soapenv:Envelope\u0026gt;\u0026#39; \u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;soapenv:Header/\u0026gt; \u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:getPetsResponse\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;1\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Zaza\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;2\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Tigress\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;3\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Maki\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;4\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Toufik\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;/pet:getPetsResponse\u0026gt; \u0026lt;/soapenv:Body\u0026gt; \u0026lt;/soapenv:Envelope\u0026gt; This is your first SOAP mock 🎉 Nice achievement!\n4. Using SOAP request element Let\u0026rsquo;s make things a bit more spicy by adding request parameters. Now assume we want to provide a simple searching operation to retrieve all pets in store using a simple filter. We\u0026rsquo;ll end up adding a new searchPets method in our WebService. Of course, we\u0026rsquo;ll have to define a new searchPetsRequest input message so that users will specify name=zoe to get all the pets having zoe in their name.\nSo we\u0026rsquo;ll add new things in our WSDL document like below: new elements and messages, and we complete the service with a new searchPets operation:\n\u0026lt;wsdl:types\u0026gt; \u0026lt;xs:schema\u0026gt; \u0026lt;!-- [...] --\u0026gt; \u0026lt;xs:element name=\u0026#34;searchPets\u0026#34;\u0026gt; \u0026lt;xs:complexType\u0026gt; \u0026lt;xs:sequence\u0026gt; \u0026lt;xs:element minOccurs=\u0026#34;1\u0026#34; maxOccurs=\u0026#34;1\u0026#34; name=\u0026#34;name\u0026#34; type=\u0026#34;xs:string\u0026#34; /\u0026gt; \u0026lt;/xs:sequence\u0026gt; \u0026lt;/xs:complexType\u0026gt; \u0026lt;/xs:element\u0026gt; \u0026lt;xs:element name=\u0026#34;searchPetsResponse\u0026#34; type=\u0026#34;tns:PetsResponse\u0026#34; /\u0026gt; \u0026lt;/xs:schema\u0026gt; \u0026lt;/wsdl:types\u0026gt; \u0026lt;wsdl:message name=\u0026#34;searchPets\u0026#34;\u0026gt; \u0026lt;wsdl:part element=\u0026#34;tns:searchPets\u0026#34; name=\u0026#34;parameters\u0026#34; /\u0026gt; \u0026lt;/wsdl:message\u0026gt; \u0026lt;wsdl:message name=\u0026#34;searchPetsResponse\u0026#34;\u0026gt; \u0026lt;wsdl:part element=\u0026#34;tns:searchPetsResponse\u0026#34; name=\u0026#34;parameters\u0026#34; /\u0026gt; \u0026lt;/wsdl:message\u0026gt; \u0026lt;wsdl:portType name=\u0026#34;PetstoreService\u0026#34;\u0026gt; \u0026lt;!-- [...] --\u0026gt;
\u0026lt;wsdl:operation name=\u0026#34;searchPets\u0026#34;\u0026gt; \u0026lt;wsdl:input message=\u0026#34;tns:searchPets\u0026#34; name=\u0026#34;searchPets\u0026#34;/\u0026gt; \u0026lt;wsdl:output message=\u0026#34;tns:searchPetsResponse\u0026#34; name=\u0026#34;searchPetsResponse\u0026#34;/\u0026gt; \u0026lt;/wsdl:operation\u0026gt; \u0026lt;/wsdl:portType\u0026gt; \u0026lt;!-- [...] --\u0026gt; \u0026lt;wsdl:binding name=\u0026#34;PetstoreServiceSoapBinding\u0026#34; type=\u0026#34;tns:PetstoreService\u0026#34;\u0026gt; \u0026lt;soap:binding style=\u0026#34;document\u0026#34; transport=\u0026#34;http://schemas.xmlsoap.org/soap/http\u0026#34; /\u0026gt; \u0026lt;!-- [...] --\u0026gt; \u0026lt;wsdl:operation name=\u0026#34;searchPets\u0026#34;\u0026gt; \u0026lt;soap:operation soapAction=\u0026#34;http://www.acme.org/petstore/searchPets\u0026#34; style=\u0026#34;document\u0026#34; /\u0026gt; \u0026lt;wsdl:input\u0026gt; \u0026lt;soap:body use=\u0026#34;literal\u0026#34; /\u0026gt; \u0026lt;/wsdl:input\u0026gt; \u0026lt;wsdl:output\u0026gt; \u0026lt;soap:body use=\u0026#34;literal\u0026#34; /\u0026gt; \u0026lt;/wsdl:output\u0026gt; \u0026lt;/wsdl:operation\u0026gt; \u0026lt;/wsdl:binding\u0026gt; You can then refresh the Service definition in SoapUI to have it detect the new operation. Still in SoapUI, you must now add the new operation to your Mock Server and a new Test Case to the existing Test Suite. Let\u0026rsquo;s complete our sample data with two new requests and responses for the new searchPets operation:\nOne request/response pair for searching for pets having a k in their name. We\u0026rsquo;ll name it searchPets K, Another request/response pair for searching for pets having an i in their name. We\u0026rsquo;ll name it searchPets I. These are the results you should achieve below:\nWhat about the dispatcher property we mentioned earlier? For this operation, we\u0026rsquo;re going to use another dispatcher that allows analyzing the incoming SOAP body to find the correct response. This dispatcher is called QUERY_MATCH and uses an XPath expression to extract data from the incoming request to select the response.\nTo set this dispatcher configuration, you will have to go to the Mock Server searchPets operation properties and select the appropriate QUERY_MATCH option. Then, for each request you\u0026rsquo;ll have to add a matching rule (let\u0026rsquo;s name them match_i and match_k for example) and define an XPath expression. You\u0026rsquo;ll have to use the expression below that declares an alias pet for the XML namespace of your query and a selector to extract the incoming name property:\ndeclare namespace pet=\u0026#39;http://www.acme.org/petstore\u0026#39;; //pet:searchPets/name Then, based on this property value (k or i), you\u0026rsquo;ll define whether to return the searchPets K or the searchPets I response.
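If you want to sanity-check such a selector outside of SoapUI, one option is xmllint. Note that plain xmllint speaks XPath 1.0 and does not support the declare namespace prologue, so this sketch uses local-name() instead; request.xml is just a hypothetical file holding one of the request envelopes shown in the curl commands below:\n$ xmllint --xpath \u0026#34;//*[local-name() = \u0026#39;searchPets\u0026#39;]/*[local-name() = \u0026#39;name\u0026#39;]/text()\u0026#34; request.xml --- OUTPUT --- i If the command prints the name filter (i here), your matching rules have a value to match against.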
You should achieve the following results in SoapUI:\n🚨 Take care of saving your edits before exporting!\nImport this updated SoapUI Project back in Microcks and see the results:\nLet\u0026rsquo;s try the new SOAP operation mock with this command:\n$ curl -X POST \u0026#39;http://localhost:8585/soap/PetstoreService/1.0\u0026#39; -H \u0026#39;Content-Type: application/xml\u0026#39; \\ -d \u0026#39;\u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt;\u0026lt;soapenv:Header/\u0026gt;\u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:searchPets\u0026gt;\u0026lt;name\u0026gt;i\u0026lt;/name\u0026gt;\u0026lt;/pet:searchPets\u0026gt;\u0026lt;/soapenv:Body\u0026gt;\u0026lt;/soapenv:Envelope\u0026gt;\u0026#39; \u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;soapenv:Header/\u0026gt; \u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:searchPetsResponse\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;2\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Tigress\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;3\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Maki\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;4\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Toufik\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;/pet:searchPetsResponse\u0026gt; \u0026lt;/soapenv:Body\u0026gt; \u0026lt;/soapenv:Envelope\u0026gt; and this one:\n$ curl -X POST \u0026#39;http://localhost:8585/soap/PetstoreService/1.0\u0026#39; -H \u0026#39;Content-Type: application/xml\u0026#39; \\ -d \u0026#39;\u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt;\u0026lt;soapenv:Header/\u0026gt;\u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:searchPets\u0026gt;\u0026lt;name\u0026gt;k\u0026lt;/name\u0026gt;\u0026lt;/pet:searchPets\u0026gt;\u0026lt;/soapenv:Body\u0026gt;\u0026lt;/soapenv:Envelope\u0026gt;\u0026#39; \u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;soapenv:Header/\u0026gt; \u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:searchPetsResponse\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;3\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Maki\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;pet\u0026gt; \u0026lt;id\u0026gt;4\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Toufik\u0026lt;/name\u0026gt; \u0026lt;/pet\u0026gt; \u0026lt;/pet:searchPetsResponse\u0026gt; \u0026lt;/soapenv:Body\u0026gt; \u0026lt;/soapenv:Envelope\u0026gt; 🎉 Fantastic! We now have a mock with routing logic based on the request XML content.\n5. Mocking a creation operation And now the final step! Let\u0026rsquo;s deal with a new method that allows registering a new pet within the Petstore. For that, you\u0026rsquo;ll typically have to define a new createPet operation on the PetstoreService. In order to be meaningful to the user of this operation, a mock would have to integrate some logic that reuses content from the incoming request and/or generates sample data.
That\u0026rsquo;s typically what we\u0026rsquo;re going to do in this last section 😉\nLet\u0026rsquo;s add such a new operation into the WSDL document file by adding the following elements:\n\u0026lt;wsdl:types\u0026gt; \u0026lt;xs:schema\u0026gt; \u0026lt;!-- [...] --\u0026gt; \u0026lt;xs:element name=\u0026#34;createPet\u0026#34;\u0026gt; \u0026lt;xs:complexType\u0026gt; \u0026lt;xs:sequence\u0026gt; \u0026lt;xs:element minOccurs=\u0026#34;1\u0026#34; maxOccurs=\u0026#34;1\u0026#34; name=\u0026#34;name\u0026#34; type=\u0026#34;xs:string\u0026#34; /\u0026gt; \u0026lt;/xs:sequence\u0026gt; \u0026lt;/xs:complexType\u0026gt; \u0026lt;/xs:element\u0026gt; \u0026lt;xs:element name=\u0026#34;createPetResponse\u0026#34; type=\u0026#34;tns:Pet\u0026#34; /\u0026gt; \u0026lt;/xs:schema\u0026gt; \u0026lt;/wsdl:types\u0026gt; \u0026lt;wsdl:message name=\u0026#34;createPet\u0026#34;\u0026gt; \u0026lt;wsdl:part element=\u0026#34;tns:createPet\u0026#34; name=\u0026#34;parameters\u0026#34; /\u0026gt; \u0026lt;/wsdl:message\u0026gt; \u0026lt;wsdl:message name=\u0026#34;createPetResponse\u0026#34;\u0026gt; \u0026lt;wsdl:part element=\u0026#34;tns:createPetResponse\u0026#34; name=\u0026#34;parameters\u0026#34; /\u0026gt; \u0026lt;/wsdl:message\u0026gt; \u0026lt;wsdl:portType name=\u0026#34;PetstoreService\u0026#34;\u0026gt; \u0026lt;!-- [...] --\u0026gt; \u0026lt;wsdl:operation name=\u0026#34;createPet\u0026#34;\u0026gt; \u0026lt;wsdl:input message=\u0026#34;tns:createPet\u0026#34; name=\u0026#34;createPet\u0026#34;/\u0026gt; \u0026lt;wsdl:output message=\u0026#34;tns:createPetResponse\u0026#34; name=\u0026#34;createPetResponse\u0026#34;/\u0026gt; \u0026lt;/wsdl:operation\u0026gt; \u0026lt;/wsdl:portType\u0026gt; \u0026lt;!-- [...] --\u0026gt; \u0026lt;wsdl:binding name=\u0026#34;PetstoreServiceSoapBinding\u0026#34; type=\u0026#34;tns:PetstoreService\u0026#34;\u0026gt; \u0026lt;soap:binding style=\u0026#34;document\u0026#34; transport=\u0026#34;http://schemas.xmlsoap.org/soap/http\u0026#34; /\u0026gt; \u0026lt;!-- [...] --\u0026gt; \u0026lt;wsdl:operation name=\u0026#34;createPet\u0026#34;\u0026gt; \u0026lt;soap:operation soapAction=\u0026#34;http://www.acme.org/petstore/createPet\u0026#34; style=\u0026#34;document\u0026#34; /\u0026gt; \u0026lt;wsdl:input\u0026gt; \u0026lt;soap:body use=\u0026#34;literal\u0026#34; /\u0026gt; \u0026lt;/wsdl:input\u0026gt; \u0026lt;wsdl:output\u0026gt; \u0026lt;soap:body use=\u0026#34;literal\u0026#34; /\u0026gt; \u0026lt;/wsdl:output\u0026gt; \u0026lt;/wsdl:operation\u0026gt; \u0026lt;/wsdl:binding\u0026gt; You can then refresh the Service definition in SoapUI to have it detect the new operation. Still in SoapUI, you must now add the new operation to your Mock Server and a new Test Case to the existing Test Suite. Let\u0026rsquo;s complete our sample data with a new request/response pair for the new createPet operation.\nThe request will use a statically defined pet name to be created (here Jojo in the screenshot) but, as said above, we want to define a smart mock with some logic. Thankfully, Microcks has the ability to generate dynamic mock content. When defining our example into SoapUI, we are going to use two specific notations:\n{{ randomInt(5,10) }} for asking Microcks to generate a random integer between 5 and 10 for us (remember: the other pets have ids going from 1 to 4), {{ request.body//*[local-name() = 'name'] }} for asking Microcks to reuse the name property of the request body here.
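To make this concrete, here is a sketch of what the createPet response content could look like once both notations are combined - it mirrors the getPetsResponse samples above and the result we\u0026rsquo;ll observe below, though your exact response layout in SoapUI may differ:\n\u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;soapenv:Header/\u0026gt; \u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:createPetResponse\u0026gt; \u0026lt;id\u0026gt;{{ randomInt(5,10) }}\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;{{ request.body//*[local-name() = 'name'] }}\u0026lt;/name\u0026gt; \u0026lt;/pet:createPetResponse\u0026gt; \u0026lt;/soapenv:Body\u0026gt; \u0026lt;/soapenv:Envelope\u0026gt; Each placeholder is evaluated by Microcks against the incoming request every time the mock is invoked.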
Let\u0026rsquo;s complete our SoapUI project with a new request and a new response - both named createPet - for the new createPet operation. Do not forget to also update the Dispatcher of the Mock Server operation as illustrated below:\n🚨 Take care of saving your edits before exporting!\nImport this updated SoapUI Project back in Microcks and see the results:\nLet\u0026rsquo;s now finally test this new operation using some content and see what\u0026rsquo;s going on:\n$ curl -X POST \u0026#39;http://localhost:8585/soap/PetstoreService/1.0\u0026#39; -H \u0026#39;Content-Type: application/xml\u0026#39; \\ -d \u0026#39;\u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt;\u0026lt;soapenv:Header/\u0026gt;\u0026lt;soapenv:Body\u0026gt;\u0026lt;pet:createPet\u0026gt;\u0026lt;name\u0026gt;Rusty\u0026lt;/name\u0026gt;\u0026lt;/pet:createPet\u0026gt;\u0026lt;/soapenv:Body\u0026gt;\u0026lt;/soapenv:Envelope\u0026gt;\u0026#39; \u0026lt;soapenv:Envelope xmlns:soapenv=\u0026#34;http://schemas.xmlsoap.org/soap/envelope/\u0026#34; xmlns:pet=\u0026#34;http://www.acme.org/petstore\u0026#34;\u0026gt; \u0026lt;soapenv:Header/\u0026gt; \u0026lt;soapenv:Body\u0026gt; \u0026lt;pet:createPetResponse\u0026gt; \u0026lt;id\u0026gt;7\u0026lt;/id\u0026gt; \u0026lt;name\u0026gt;Rusty\u0026lt;/name\u0026gt; \u0026lt;/pet:createPetResponse\u0026gt; \u0026lt;/soapenv:Body\u0026gt; \u0026lt;/soapenv:Envelope\u0026gt; As a result we\u0026rsquo;ve got our pet named Rusty being returned with a newly generated id. Ta Dam! 🥳\n🛠️ As a validation, send a few more requests changing your pet name. You\u0026rsquo;ll check that the given name is always returned and that the id is actually random. But you can also go further by defining an advanced dispatcher that will inspect your request body content to decide which response must be sent back. Very useful to describe different creation or error cases!\nWrap-Up In this tutorial we have seen the basics on how Microcks can be used to mock responses of a SOAP WebService. We introduced some Microcks concepts like examples, dispatchers and templating features that are used to produce a live simulation. This definitely helps speed up the feedback loop on the ongoing design as well as the development of a consumer using this service.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/administration/","title":"Administration","description":"Here below all the guides related to **Administration**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/references/","title":"References","description":"Here below all the documentation pages related to **References**.","searchKeyword":"","content":"Microcks\u0026rsquo; references Welcome to Microcks References! Our References section documents the Microcks installation, configuration and usage parameters.\n💡 Remember Contribute to Microcks References\nCode isn\u0026rsquo;t the only way to contribute to OSS; Dev Docs are a huge help that benefit the entire OSS ecosystem. At Microcks, we value Doc contributions as much as every other type of contribution.
❤️\nTo get started as a Docs contributor:\nFamiliarize yourself with our project\u0026rsquo;s Contribution Guide and our Code of Conduct Head over to our Microcks Docs Board Pick an issue you would like to contribute to and leave a comment introducing yourself. This is also the perfect place to leave any questions you may have on how to get started. If there is no work done in that Docs issue yet, feel free to open a PR and get started! Docs contributor questions\nDo you have a documentation contributor question and you\u0026rsquo;re wondering how to tag us into a GitHub discussion or PR? Have no fear!\nJoin us on Discord and use the #documentation channel to ping us!\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/async-protocols/nats-support/","title":"NATS Mocking & Testing","description":"","searchKeyword":"","content":"Overview This guide shows you how to use the NATS protocol with Microcks. NATS is a Cloud Native, Open Source and High-performance Messaging technology. It is a single technology that enables applications to securely communicate across any combination of cloud vendors, on-premise, edge, web and mobile, and devices. Client APIs are provided in over 40 languages and frameworks and you can check out the full list of clients.\nMicrocks supports NATS as a protocol binding for AsyncAPI. That means that Microcks is able to connect to a NATS broker for publishing mock messages as soon as it receives a valid AsyncAPI Specification and to connect to any NATS broker in your organization to check that flowing messages are compliant with the schema described within your specification.\nLet\u0026rsquo;s go! 🚀\n1. Setup NATS broker connection First mandatory step here is to setup Microcks so that it will be able to connect to a NATS broker for sending mock messages. Microcks has been tested successfully with NATS version 2.9.8. It can be deployed as a containerized workload on your Kubernetes cluster. Microcks does not provide any installation scripts or procedures; please refer to the project or related products documentation.\nIf you have used the Operator based installation of Microcks, you\u0026rsquo;ll need to add some extra properties to your MicrocksInstall custom resource. The fragment below shows the important ones:\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: [...] features: async: enabled: true [...] nats: url: nats-broker.app.example.com:4222 username: microcks password: microcks The async feature should of course be enabled and then the important things to notice are located in the nats block:\nurl is the hostname + port where the broker can be reached by Microcks, username is simply the user to use for authenticating the connection, password represents this user\u0026rsquo;s credentials. If you have used the Helm Chart based installation of Microcks, this is the corresponding fragment to put in a Values.yml file:\n[...] features: async: enabled: true [...] nats: url: nats-broker.app.example.com:4222 username: microcks password: microcks The actual connection to the NATS broker will only be made once Microcks sends mock messages to it. Let\u0026rsquo;s see below how to use the NATS binding with AsyncAPI.\n2. Use NATS in AsyncAPI As NATS is not the default binding in Microcks, you should explicitly add it as a valid binding within your AsyncAPI contract. Below is a fragment of an AsyncAPI specification file that shows the important things to notice when planning to use NATS and Microcks with AsyncAPI.
It comes from a sample you can find on our GitHub repository.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] channels: user/signedup: [...] subscribe: [...] bindings: nats: queue: my-nats-queue message: [...] payload: [...] You\u0026rsquo;ll notice that we just have to add a non-empty nats block within the operation bindings. Just define one property (like queue for example) and Microcks will detect this binding has been specified. See the full binding spec for details.\nAs usual, as Microcks internal mechanics are based on examples, you will also have to attach examples to your AsyncAPI specification.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] channels: user/signedup: [...] subscribe: [...] message: [...] examples: - laurent: summary: Example for Laurent user headers: |- {\u0026#34;my-app-header\u0026#34;: 23} payload: |- {\u0026#34;id\u0026#34;: \u0026#34;{{randomString(32)}}\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;{{now()}}\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} - john: summary: Example for John Doe user headers: my-app-header: 24 payload: id: \u0026#39;{{randomString(32)}}\u0026#39; sendAt: \u0026#39;{{now()}}\u0026#39; fullName: John Doe email: [email protected] age: 36 If you\u0026rsquo;re not yet accustomed to it, you may wonder what this {{randomString(32)}} notation is? These are just Templating functions that allow generation of dynamic content! 😉\nNow simply import your AsyncAPI file into Microcks either using a Direct upload import or by defining an Importer Job. Both methods are described in this page.\n3. Validate your mocks Now it’s time to validate that mock publication of messages on the connected broker is correct. In a real world scenario this means developing a consuming script or application that connects to the topic where Microcks is publishing messages.\nFor our User signed-up API, we have such a consumer in one GitHub repository.\nFollow the steps below to retrieve it, install dependencies and check the Microcks mocks:\n$ git clone https://github.com/microcks/api-tooling.git $ cd api-tooling/async-clients/natsjs-client $ npm install $ node consumer.js nats-broker.app.example.com:4222 UsersignedupAPI-0.1.30-user/signedup microcks microcks Connecting to nats-broker.app.example.com:4222 on topic UsersignedupAPI-0.1.30-user/signedup { \u0026#34;id\u0026#34;: \u0026#34;eyN7TbotUwN6RTPD4mRwwStS8gBA7tI6\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1675085731224\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41 } { \u0026#34;id\u0026#34;: \u0026#34;IsjSzI7o910s30QXrJGeAfqgGEsPw9uO\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1675085731227\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;John Doe\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 36 } [...] 🎉 Fantastic! We are receiving the two different messages corresponding to the two defined examples every 3 seconds, which is the default publication frequency. You\u0026rsquo;ll notice that the id and sendAt properties have different values thanks to the templating notation.
4. Run AsyncAPI tests Now the final step is to perform some tests of the validation features in Microcks. As this requires an API implementation - and it\u0026rsquo;s not as easy as writing an HTTP based API implementation - we have some helpful scripts in our api-tooling GitHub repository. These scripts are made for working with the User signed-up API sample we used so far, but feel free to adapt them for your own use.\nImagine that you want to validate messages from a QA environment with a dedicated NATS broker. Still in the natsjs-client folder, now use the producer.js utility script to publish messages on a user-signedups queue:\n$ node producer.js nats-broker-qa.app.example.com:4222 user-signedups qa-user qa-password Connecting to nats-broker-qa.app.example.com:4222 on topic user-signedups Sending {\u0026#34;id\u0026#34;:\u0026#34;itq382xi2usbz41nwel888\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1675089667454\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:41} Sending {\u0026#34;id\u0026#34;:\u0026#34;qfb0fn4yrff06ylrge5fh75\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1675089670454\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:41} [...] Do not interrupt the execution of the script for now.\nIf the QA broker access is secured - let\u0026rsquo;s say with credentials and custom certificates - we will first have to manage a Secret in Microcks to hold this information. Within the Microcks console, first go to the Administration section and the Secrets tab.\nAdministration and Secrets will only be available to people having the administrator role assigned. Please check this documentation for details.\nThe screenshot below illustrates the creation of such a secret for your QA NATS Broker with username and credentials.\nOnce saved, we can create a New Test within the Microcks web console. Use the following elements in the Test form:\nTest Endpoint: nats://nats-broker-qa.app.example.com:4222/user-signedups that is referencing the NATS broker endpoint, Runner: ASYNC API SCHEMA for validating against the AsyncAPI specification of the API, Timeout: Keep the default of 10 seconds, Secret: This is where you\u0026rsquo;ll select the QA NATS Broker you previously created. Launch the test, wait a few seconds and you should get access to the test results as illustrated below:\nThis is fine and we can see that Microcks captured messages and validated them against the payload schema that is embedded into the AsyncAPI specification. In our sample, every property is required, the message does not allow additionalProperties to be present, and sendAt is of string type.\nSo now let\u0026rsquo;s see what happens if we tweak that a bit\u0026hellip; Open the producer.js script in your favorite editor to comment out lines 28 and 29 and to uncomment lines 30 and 31.
This removes the fullName property and adds an unexpected displayName property, and it also changes the type of the sendAt property, as shown below after having restarted the producer:\n$ node producer.js nats-broker-qa.app.example.com:4222 user-signedups qa-user qa-password Connecting to nats-broker-qa.app.example.com:4222 on topic user-signedups Sending {\u0026#34;id\u0026#34;:\u0026#34;9x12cp2u40f01avend41ryw\u0026#34;,\u0026#34;sendAt\u0026#34;:1675092166658,\u0026#34;displayName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:41} Sending {\u0026#34;id\u0026#34;:\u0026#34;han9zjhmqhkzkl76epz4xm\u0026#34;,\u0026#34;sendAt\u0026#34;:1675092169659,\u0026#34;displayName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:41} Sending {\u0026#34;id\u0026#34;:\u0026#34;kdmsl91ydtn7xf99jzy8\u0026#34;,\u0026#34;sendAt\u0026#34;:1675092172660,\u0026#34;displayName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:41} [...] Relaunch a new test and you should get results similar to those below:\n🥳 We can see that there\u0026rsquo;s now a failure and that\u0026rsquo;s perfect! What does that mean? It means that when your application or devices are sending garbage, Microcks will be able to spot this and inform you that the expected message format is not respected.\nWrap-Up In this guide we have seen how Microcks can also be used to send mock messages on a NATS broker connected to the Microcks instance. This helps speed up the development of applications consuming these messages. We finally ended up demonstrating how Microcks can be used to detect any drifting issues between the expected message format and the one effectively used by real-life producers.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/direct-api/","title":"Generating Direct API","description":"","searchKeyword":"","content":"Overview Even though Microcks promotes a contract-first approach for defining mocks, in real life it may be difficult to start that way without a great maturity on API and Service contracts. You often need to play a bit with a fake API to really figure out your needs and how you should then design the API contract. In order to help with this situation, Microcks offers the ability to directly generate an API that you may use as a sandbox.\nThis guide shows you how Microcks is able to easily generate, in a few clicks:\nREST API with CRUD operations (CRUD for Create-Retrieve-Update-Delete) and associated mocks that you\u0026rsquo;ll be able to use for recording, retrieving and deleting any type of JSON document, Event-Driven API with a single Publish operation with associated reference payload that will be used to simulate event emission whether on Kafka or WebSocket protocols. 1. A few concepts
In order to access this Direct API wizard, just go to the API | Services repository and hit the Add Direct API\u0026hellip; button:\nEach kind of Direct API has the same common properties. After selecting the type, the wizard asks you to give the following API | Service properties:\nService Name and Version will be the unique identifiers of the new Direct API you want to create, Resource will be the kind of resource that will be managed by this Direct API. 2. Generate a Direct REST API Create the API Let\u0026rsquo;s start with a basic Direct API: the Foo API!\nIn the next step of this wizard, you\u0026rsquo;ll have the ability to assign a Reference JSON Payload for your Direct API. When provided, this payload is used to infer a schema for the data exposed by this API. Schema information is then integrated into the generated API specifications.\n💡 Reference JSON Payload is optional for a Direct REST API but mandatory for a Direct Event driven API.\nNow, just hit the Next button, confirm on the next screen and you\u0026rsquo;ll have a ready-to-use API that proposes different operations as shown in the capture below.\nThis Direct REST API is immediately exposing mock endpoints for the different operations. The corresponding OpenAPI contract is also directly available for download. It integrates schema information deduced from the reference payload you may have provided in the previous step.\nGiven the previously created Direct API, it is now possible to use the /dynarest/Foo+API/0.1/foo endpoint (appended after your Microcks base URL) to interact with it. This Direct API is in fact agnostic to the payload you send to it as long as it is formatted as JSON. For example, you can easily record a new foo resource having name and bar attributes like this:\ncurl -X POST http://localhost:8080/dynarest/Foo%20API/0.1/foo -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;name\u0026#34;:\u0026#34;andrew\u0026#34;, \u0026#34;bar\u0026#34;: 223}\u0026#39; And you should receive the following response:\n{ \u0026#34;name\u0026#34; : \u0026#34;andrew\u0026#34;, \u0026#34;bar\u0026#34; : 223, \u0026#34;id\u0026#34; : \u0026#34;5a1eb52a710ffa9f0b7c6de8\u0026#34; } What Microcks has simply done is record your JSON payload and assign it an id attribute.\nCreate resources Creating resources is useful, but how do you check which resources already exist? Let\u0026rsquo;s create another bunch of foo resources like this:\ncurl -X POST http://localhost:8080/dynarest/Foo+API/0.1/foo -H \u0026#39;Content-type: application/json\u0026#39; -d \u0026#39;{\u0026#34;name\u0026#34;:\u0026#34;andrew\u0026#34;, \u0026#34;bar\u0026#34;: 224}\u0026#39; curl -X POST http://localhost:8080/dynarest/Foo+API/0.1/foo -H \u0026#39;Content-type: application/json\u0026#39; -d \u0026#39;{\u0026#34;name\u0026#34;:\u0026#34;marina\u0026#34;, \u0026#34;bar\u0026#34;: 225}\u0026#39; curl -X POST http://localhost:8080/dynarest/Foo+API/0.1/foo -H \u0026#39;Content-type: application/json\u0026#39; -d \u0026#39;{\u0026#34;name\u0026#34;:\u0026#34;marina\u0026#34;, \u0026#34;bar\u0026#34;: 226}\u0026#39; Now, just hit the Resources button next to the Operations section, and you should be able to check all the resources Microcks has recorded as viable representations of the foo resource.
Each of them has received a unique identifier:\nUsing Direct API in Microcks is thus a simple and super-fast means of recording sample resources to illustrate what should be the future contract design!\nQuery resources Beyond the simple checking of created resources, those resources are also directly available through the endpoints corresponding to retrieval operations. As every resource recorded is identified using an id attribute, it\u0026rsquo;s really easy to invoke the GET endpoint using this id like this:\ncurl -X GET http://localhost:8080/dynarest/Foo+API/0.1/foo/5a1eb52a710ffa9f0b7c6de8 This gives you the JSON payload you have previously recorded!\n{ \u0026#34;name\u0026#34; : \u0026#34;andrew\u0026#34;, \u0026#34;bar\u0026#34; : 223, \u0026#34;id\u0026#34; : \u0026#34;5a1eb52a710ffa9f0b7c6de8\u0026#34; } More sophisticated retrieval options are also available when using the listing endpoint of the dynamic Service. Microcks follows the conventions of querying by example: you can specify a JSON document as data and it will be used as a prototype for retrieving recorded resources having the same attributes and same attribute values. For example, to get all the foo resources having a name of marina, just issue this query:\ncurl -X GET http://localhost:8080/dynarest/Foo+API/0.1/foo -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;marina\u0026#34;}\u0026#39; That will give you the following results:\n[{ \u0026#34;name\u0026#34; : \u0026#34;marina\u0026#34;, \u0026#34;bar\u0026#34; : 225, \u0026#34;id\u0026#34; : \u0026#34;5a1eb608710ffa9f0b7c6deb\u0026#34; }, { \u0026#34;name\u0026#34; : \u0026#34;marina\u0026#34;, \u0026#34;bar\u0026#34; : 226, \u0026#34;id\u0026#34; : \u0026#34;5a1eb613710ffa9f0b7c6dec\u0026#34; }] Microcks is also able to understand the operators you\u0026rsquo;ll find in the MongoDB Query DSL syntax. Thus you\u0026rsquo;re able, for example, to filter results using a range for an integer value like this:\ncurl -X GET http://localhost:8080/dynarest/Foo+API/0.1/foo -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;bar\u0026#34;: {$gt: 223, $lt: 226} }\u0026#39; With results:\n[{ \u0026#34;name\u0026#34; : \u0026#34;andrew\u0026#34;, \u0026#34;bar\u0026#34; : 224, \u0026#34;id\u0026#34; : \u0026#34;5a1eb5fd710ffa9f0b7c6dea\u0026#34; }, { \u0026#34;name\u0026#34; : \u0026#34;marina\u0026#34;, \u0026#34;bar\u0026#34; : 225, \u0026#34;id\u0026#34; : \u0026#34;5a1eb608710ffa9f0b7c6deb\u0026#34; }] You can also mix-and-match attribute values and DSL operators so that you may build more complex filters, like this one restricting the previous set of foo resources to those having the name marina:\ncurl -X GET http://localhost:8080/dynarest/Foo+API/0.1/foo -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;marina\u0026#34;, \u0026#34;bar\u0026#34;: {$gt: 223, $lt: 226} }\u0026#39; With results:\n[{ \u0026#34;name\u0026#34; : \u0026#34;marina\u0026#34;, \u0026#34;bar\u0026#34; : 225, \u0026#34;id\u0026#34; : \u0026#34;5a1eb608710ffa9f0b7c6deb\u0026#34; }] 3. Generate a Direct Event Driven API The Direct API feature is also able to manage Event Driven APIs that are described using AsyncAPI specifications. Imagine a MyQuote API that notifies quote updates on an asynchronous channel. You can define this API that way:\nThen adding a reference JSON payload - such a payload can also include some templating expressions to get some more dynamic data.
Here we define producing random stock symbols and ranged price values:\nClicking Next a few more times, you now have a Direct Async API that is immediately exposed on a WebSocket endpoint and on the Kafka broker Microcks is attached to. Its AsyncAPI specification is also directly available for download.\nLooking at the operation details, you can retrieve the information of the endpoints used by the different protocols and issue commands to receive the different messages published by the mock engine:\n$ kcat -b my-cluster-kafka-bootstrap.apps.try.microcks.io:443 -t MyQuoteAPI-1.0-quotes -o end % Auto-selecting Consumer mode (use -P or -C to override) % Reached end of topic MyQuoteAPI-1.0-quotes [0] at offset 87 { \u0026#34;symbol\u0026#34;: \u0026#34;GOOG\u0026#34;, \u0026#34;price\u0026#34;: \u0026#34;124\u0026#34; } % Reached end of topic MyQuoteAPI-1.0-quotes [0] at offset 88 { \u0026#34;symbol\u0026#34;: \u0026#34;GOOG\u0026#34;, \u0026#34;price\u0026#34;: \u0026#34;121\u0026#34; } % Reached end of topic MyQuoteAPI-1.0-quotes [0] at offset 89 { \u0026#34;symbol\u0026#34;: \u0026#34;IBM\u0026#34;, \u0026#34;price\u0026#34;: \u0026#34;127\u0026#34; } % Reached end of topic MyQuoteAPI-1.0-quotes [0] at offset 90 { \u0026#34;symbol\u0026#34;: \u0026#34;GOOG\u0026#34;, \u0026#34;price\u0026#34;: \u0026#34;134\u0026#34; } [...] Wrap-up In a few steps, you\u0026rsquo;ve discovered how easy it is to have Microcks generate fake APIs for you! This may allow you to quickly bootstrap your API design and contracts while exposing mock endpoints that allow your consumers or partners to immediately start testing your API.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/examples/","title":"API Examples Format","description":"","searchKeyword":"","content":"Introduction APIExamples format is Microcks\u0026rsquo; own specification format for defining examples intended to be used by Microcks mocks. It can be seen as a lightweight, general purpose specification that solely serves the need to provide mock datasets. The goal of this specification is to keep the Microcks adoption curve very smooth for development teams but also for non-developers.\n💡 APIExamples artifacts are supported starting with Microcks 1.10.0.\nAPIExamples files are simple YAML and aim to be very easy to understand and edit. Moreover, the description is independent of the API protocol! We rather describe examples depending on the API interaction style: Request/Response based or Event-driven/Asynchronous.\nFor ease of use, we provide a JSON Schema that you can download here. Thus, you can integrate it in your code editor and benefit from code completion and validation.\nAPIExamples documents are intended to be imported as secondary artifacts only, thanks to the Multi-Artifacts support.\nAPI Examples properties Let\u0026rsquo;s start with an example! First, such an APIExamples file must always start with the lines below that clearly identify the artifact type as well as the Microcks API/Service it refers to.\napiVersion: mocks.microcks.io/v1alpha1 kind: APIExamples metadata: name: API Pastry - 2.0 version: \u0026#39;2.0.0\u0026#39; operations: [...] The above snippet is related to the API Pastry - 2.0 in version 2.0.0. That means that this API version should already exist in your repository, otherwise the document will be ignored during import.\nThe examples from this file will be organized by API/Service operation.
So after the mandatory headers, you\u0026rsquo;ll find an operations: marker to start the examples definitions.\nDirect children of operations are the operation names as described below.\nRequest/Response based API In the case of a Request/Response based API, examples must be described using a request and a response attribute like in the example below:\n[...] operations: \u0026#39;GET /pastry/{name}\u0026#39;: Eclair Chocolat: request: parameters: name: Eclair Chocolat headers: Accept: application/json response: mediaType: application/json body: name: Eclair Chocolat description: Delicieux Eclair Chocolat pas calorique du tout size: M price: 2.5 status: unknown Eclair Chocolat Xml: request: parameters: name: Eclair Chocolat headers: Accept: text/xml response: status: \u0026#39;200\u0026#39; mediaType: text/xml body: |- \u0026lt;pastry\u0026gt; \u0026lt;name\u0026gt;Eclair Cafe\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;Delicieux Eclair au Chocolat pas calorique du tout\u0026lt;/description\u0026gt; \u0026lt;size\u0026gt;M\u0026lt;/size\u0026gt; \u0026lt;price\u0026gt;2.5\u0026lt;/price\u0026gt; \u0026lt;status\u0026gt;unknown\u0026lt;/status\u0026gt; \u0026lt;/pastry\u0026gt; The above snippet is pretty straightforward to understand:\nThe operation GET /pastry/{name} has 2 examples defined: Eclair Chocolat and Eclair Chocolat Xml, Both examples should be matched to a pastry name of Eclair Chocolat, defined within the request parameters. Those parameters can contain any number of parameters mapped on the operation path or on query parameters, Both request and response can define headers and a body - though it only makes sense to have a response body in this use-case, Request and response body can be defined as plain String (Json or Xml), as Yaml object or as Yaml array (automatically converted to Json during the import), A response may have additional attributes like the response status (optional - 200 is actually the default for REST API) and the mediaType of the response. The beauty of it is that the principles are kinda the same for a gRPC service:\n[...] operations: \u0026#39;greeting\u0026#39;: Laurent: request: body: firstname: Laurent lastname: Broudoux response: body: greeting: Hello Laurent Broudoux ! John: request: body: |- {\u0026#34;firstname\u0026#34;: \u0026#34;John\u0026#34;, \u0026#34;lastname\u0026#34;: \u0026#34;Doe\u0026#34;} response: body: greeting: Hello John Doe ! You can see that request and response bodies are specified either as Yaml objects or plain Json but are indeed converted to Protobuf by Microcks under the hood. You can also use APIExamples for a GraphQL API that way:\n[...] operations: film: film ZmlsbXM6MQ==: request: body: query: |- query { film(id: $id) { id title episodeID director starCount rating } } variables: id: ZmlsbXM6MQ== response: mediaType: application/json body: data: film: id: ZmlsbXM6MQ== title: A New Hope episodeID: 4 director: George Lucas starCount: 432 rating: 4.3 Event Driven/Asynchronous API Event Driven or Asynchronous interaction style APIs are a bit different as they just need to specify an eventMessage as the content of an example. Let\u0026rsquo;s have a look at the snippet below:\n[...] operations: \u0026#39;SUBSCRIBE /user/signedup\u0026#39;: jane: eventMessage: headers: my-app-header: 123 sentAt: \u0026#34;2024-07-14T18:01:28Z\u0026#34; payload: fullName: Jane Doe email: [email protected] age: 35 For this example named jane, we just have to specify an event message made of optional headers and a mandatory body.
Here again, the body can be specified as plain String, as an object or an array.\nImporting API Examples When you\u0026rsquo;re happy with your API Examples just put the resulting YAML or JSON file into your favorite Source Configuration Management tool, grab the URL of the file corresponding to the branch you want to use and add it as a regular Job import into Microcks. On import, Microcks should detect that it\u0026rsquo;s an APIExamples specification file and choose the correct importer.\n💡 Do not forget to tick the Secondary Artifact checkbox!\nSee it in action! Want to see it in action? Then, you can replay the tutorials below, replacing the Postman Collection parts with the corresponding APIExamples files 😉\nYour 1st GraphQL mock, but using the petstore-1.0-examples.yaml file, Your 1st gRPC mock, but using the petstore-v1-examples.yaml file, "},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/","title":"Artifacts Reference","description":"Here below all the documentation pages related to **Artifacts Reference**.","searchKeyword":"","content":"As exposed in the Main Concepts, Artifacts are the cornerstone in Microcks as they hold valuable information on how your API or microservices are expected to work. One of Microcks\u0026rsquo;s beauties is that it uses standard specifications or standard tooling files as Artifacts, allowing you to reuse existing assets. Microcks will use constraints and examples from them to build its knowledge base.\nMicrocks supports the following specifications and tooling file formats as artifacts, providing built-in parsers and importers for each of them:\nSoapUI project files starting with version 5.1 of SoapUI. See the Microcks\u0026rsquo; SoapUI Conventions, Swagger v2 files. See the Microcks\u0026rsquo; Swagger Conventions, OpenAPI v3.x files in either YAML or JSON format. See the Microcks\u0026rsquo; OpenAPI Conventions, AsyncAPI v2.x and AsyncAPI v3.x files in either YAML or JSON format. See the Microcks\u0026rsquo; AsyncAPI Conventions, Postman collection files with the v2.x file format, gRPC / Protocol buffers v3 .proto files. See the Microcks\u0026rsquo; gRPC Conventions, GraphQL Schema .graphql files. See the Microcks\u0026rsquo; GraphQL Conventions, HTTP Archive Format (HAR) JSON files. See the Microcks\u0026rsquo; HAR Conventions, Microcks may require those artifact files to follow some conventions in order to collect the valuable information it needs. This documentation is a reference of those different conventions for the above-mentioned formats.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/postman-conventions/","title":"Postman Conventions","description":"","searchKeyword":"","content":"Conventions In order to be correctly imported and understood by Microcks, your Postman Collection should follow a little set of reasonable conventions and best practices.\nYour Postman collection may contain one or more API definitions. However, because it\u0026rsquo;s a best practice to consider each API as an autonomous and isolated software asset, we\u0026rsquo;d recommend managing only one API definition per Postman collection and not mixing requests related to different APIs within the same Collection, Your Postman collection description should hold a custom property named version that allows tracking of the API version. It is a good practice to change this version identifier for each versioned change of the API interface.
As of writing, Postman does not allow editing of such a custom property although the Collection v2 format allows them. By convention, we allow setting it through the collection description using this syntax: version=1.0 - Here is now the full description of my collection.... We recommend having a look at our sample Postman collection for Test API to fully understand and see in action those conventions.\nIllustration Collection initialization Collection initialization is done through Import of an existing resource into Postman. As using a \u0026ldquo;contract first\u0026rdquo; approach for API definition and management is a best practice, you\u0026rsquo;ll typically choose to Import File or Import From Link referencing a Swagger or OpenAPI contract definition.\nThe screenshot below shows how to create a new collection from a Swagger file. We are using here the Test API Swagger file.\nAfter successful import and collection creation, you should get the following result in Postman: a valid Collection with a list of default requests created for your API paths and verbs. Elements of this list will be called Operations within Microcks. Here\u0026rsquo;s the result for our Test API:\nDefining Examples As stated by the Postman documentation:\n❝ Developers can mock a request and response in Postman before sending the actual request or setting up a single endpoint to return the response. Establishing an example during the earliest phase of API development requires clear communication between team members, aligns their expectations, and means developers and testers can get started more quickly. ❞\nThe next step is now to create a bunch of examples for each of the requests/operations of your Collection as explained by the Postman documentation. You\u0026rsquo;ll give each example a meaningful name regarding the use-case it is supposed to represent. Do not forget to save your example!\nDefining Test Scripts 💡 This is an optional step that is only required if you also want to use Microcks to test your Service or API implementation as the development process progresses.\nPostman allows you to attach test scripts, defined in JavaScript, to a request or Operation. Postman only allows you to attach scripts to the request level and not to examples. Such scripts should be written so that they can be applied to the different examples, but Microcks offers some ways to ease that. For a global view of tests in Postman and their capabilities, we recommend reading the Introduction to Scripts.\nAs an illustration of how Microcks uses Postman, let\u0026rsquo;s imagine we are still using the Test API we mentioned above. There\u0026rsquo;s an Operation allowing you to retrieve an order using its unique identifier. We have followed the previous section and have defined 2 examples for the corresponding request in the Collection. Now we want to write a test that ensures that when the API is invoked, the returned order has the id we specified in the URI. We will write a test script that way:\nYou will notice the usage of the following JavaScript code: var expectedId = globals[\u0026quot;id\u0026quot;];. What does that mean? In fact, globals is an array of variables managed by the Postman runtime. Usually, you have to pre-populate this array using a Pre-request script. When running this test in Microcks, such pre-request initialization is automatically performed for you!
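To make this concrete, here is a minimal sketch of what such a test script could look like - an assumption of the screenshot\u0026rsquo;s content, written with Postman\u0026rsquo;s legacy scripting objects (globals, responseBody and tests):\nvar expectedId = globals[\u0026quot;id\u0026quot;]; /* injected by Microcks from the example parameters */ var order = JSON.parse(responseBody); /* parse the returned JSON order */ tests[\u0026quot;Valid order id\u0026quot;] = (order.id === expectedId);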
Every variable used within your request definition (URI parameters or query string parameters) is injected into the globals context so that you can directly use it within your script.\nThe execution of Postman tests using Microcks follows this flow:\nFor each example defined for a request, collect URI and query string parameters as key/value pairs, Inject each pair within the globals JavaScript array, Invoke the script attached to the request with the globals injected into the runtime context, Collect the results within the tests array to detect success or failure. Here is another example of such a generic script that validates the received JSON content:\nThis script validates that all the JSON order objects returned in the response have the status that is requested using the query parameter status value. Otherwise, a Valid response assertion failure is thrown and stored in the tests array.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/developing-testcontainers/","title":"Developing with Testcontainers","description":"","searchKeyword":"","content":"Overview This guide will provide you with pointers on how to embed Microcks into your unit tests with the help of Testcontainers. The project now provides official modules for Testcontainers via a partnership with AtomicJar, the company behind this fantastic library!\nYou’ll learn how to automatically launch and shut down Microcks’ instances so that you can easily test your API clients and API contracts. You can find information on the official module on the Testcontainers Microcks page.\nAs of today, we provide support for the following languages:\nJava ☕️ - starting from Java 8 to latest releases - via a library available on Maven Central, NodeJS / Typescript - via a Javascript library with types available on NPM, Golang - via a library distributed via our GitHub, .NET - starting from .NET 6.0 to latest releases - via a library available on Nuget. Let’s go 🧊\n1. Java support Our Testcontainers module for Java is io.github.microcks:microcks-testcontainers, which you can easily add to your Maven or Gradle powered project.\nThe library makes use of our Uber distribution, and you can simply start Microcks that way:\nMicrocksContainer microcks = new MicrocksContainer( DockerImageName.parse(\u0026#34;quay.io/microcks/microcks-uber:1.10.0\u0026#34;)); microcks.start(); See our microcks-testcontainers-java repository for full details.\nSpring Boot integration Microcks Testcontainers can be easily integrated in a Spring Boot application using Spring Boot Developer Tools so that when running in test mode, Microcks can be wired into your application to provide mocks for your dependencies.\nSee our demo application for Spring Boot 🍃\nQuarkus integration Microcks Testcontainers has also been extended to provide a Quarkus Dev Service. That way Microcks can be automatically started, configured and wired to your application when starting in dev mode.\nSee our microcks-quarkus repository for full details as well as our demo application for Quarkus.\n2. NodeJS support Our Testcontainers module for Javascript is @microcks/microcks-testcontainers, which you can easily add to your NPM or Yarn powered project.\nThe library makes use of our Uber distribution, and you can simply start Microcks that way:\nconst container = await new MicrocksContainer(\u0026#34;quay.io/microcks/microcks-uber:1.10.0\u0026#34;).start(); See our microcks-testcontainers-node repository for full details and our full demo application using NestJS.\n3. 
Golang support Our Testcontainers module for Golang is github.com/testcontainers/testcontainers-go, which you can easily add to your Go mod file.\nThe library makes use of our Uber distribution, and you can simply start Microcks that way:\nmicrocksContainer, err := microcks.RunContainer(ctx, testcontainers.WithImage(\u0026#34;quay.io/microcks/microcks-uber:1.10.0\u0026#34;)) See our microcks-testcontainers-go repository for full details and our full demo application.\n4. .NET support Our Testcontainers module for .NET is Microcks.Testcontainers, which you can easily add to your project\u0026rsquo;s .csproj file.\nThe library makes use of our Uber distribution, and you can simply start Microcks that way:\nMicrocksContainer container = new MicrocksBuilder() .WithImage(\u0026#34;quay.io/microcks/microcks-uber:1.10.0\u0026#34;) .Build(); await container.StartAsync(); See our microcks-testcontainers-dotnet repository for full details and our (upcoming) full demo application.\nWrap-up Testcontainers + Microcks is really a powerful combo for simplifying the writing of robust unit or integration tests where the fixtures can be directly deduced from specifications. And the best thing is that this tooling is totally independent of your technology stack! You can use them for NodeJS, Go, Java, Ruby development, or whatever!\nWe don\u0026rsquo;t provide a built-in module for the stack you\u0026rsquo;re using? Poke us on Discord, we\u0026rsquo;d really like to get your suggestion and help to get this rolling!\nIf you want to learn more about the underlying thoughts and alternatives you may have if you\u0026rsquo;re not running Testcontainers, here is a set of blog posts below written during our explorations:\nMocking and contract-testing in your Inner Loop with Microcks - Part 1: Easy environment setup Mocking and contract-testing in your Inner Loop with Microcks - Part 2: Unit testing with Testcontainers Mocking and contract-testing in your Inner Loop with Microcks - Part 3: Quarkus Devservice FTW "},{"section":"Documentation","url":"https://microcks.io/documentation/tutorials/first-asyncapi-mock/","title":"Your 1st AsyncAPI on Kafka mock","description":"","searchKeyword":"","content":" 🪄 To Be Created\nThis is a new tutorial page that has to be written as part of our Refactoring Effort.\nGoal of this page\nWrite an AsyncAPI specification with Microcks conventions Import it into Microcks Play with exposed mock endpoints "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/async-protocols/googlepubsub-support/","title":"Pub/Sub Mocking & Testing","description":"","searchKeyword":"","content":"Overview This guide shows you how to use the Google Pub/Sub messaging service with Microcks. Pub/Sub is an asynchronous and scalable messaging service that decouples services producing messages from services processing those messages. Pub/Sub allows services to communicate asynchronously, with latencies on the order of 100 milliseconds.\nMicrocks supports Google Pub/Sub as a protocol binding for AsyncAPI. That means that Microcks is able to connect to a Google Pub/Sub service for publishing mock messages as soon as it receives a valid AsyncAPI Specification, and to connect to any Google Pub/Sub broker provided by Google Cloud Platform to check that flowing messages are compliant with the schema described within your specification.\nLet\u0026rsquo;s rock and roll! 🎸\n1. 
Setup Pub/Sub service connection The first mandatory step here is to set up Microcks so that it will be able to connect to a Pub/Sub service for sending mock messages. Before doing that, you\u0026rsquo;ll need to ensure you have proper credentials in your cluster.\nAs accessing Google Pub/Sub is subject to authentication and authorization, the pre-requisite is to create an IAM Service Account in the Google platform so that Microcks will reuse this identity to connect to the service. After you have created this service account, you\u0026rsquo;ll need to create and get access to its key file. The result is typically a JSON file you\u0026rsquo;ll download to your machine.\nLet\u0026rsquo;s say you\u0026rsquo;ve called it my-googlecloud-service-account.json; you\u0026rsquo;ll then need to transfer this file as a Secret within your Kubernetes cluster into the namespace where you plan to set up Microcks - hereafter microcks:\n$ kubectl create secret generic my-googlecloud-service-account \\ --from-file=./my-googlecloud-service-account.json -n microcks You also have to ensure that this Service Account has the required permissions for connecting to Pub/Sub, listing and creating topics. This can be done by adding the roles/pubsub.editor and roles/pubsub.publisher roles to the Service Account. Check the Pub/Sub permissions and roles for more details. Below are typical gcloud commands for that:\n$ gcloud projects add-iam-policy-binding $PROJECT \\ --member=serviceaccount:microcks-pubsub-sa@$PROJECT.iam.gserviceaccount.com \\ --role=roles/pubsub.editor $ gcloud projects add-iam-policy-binding $PROJECT \\ --member=serviceaccount:microcks-pubsub-sa@$PROJECT.iam.gserviceaccount.com \\ --role=roles/pubsub.publisher If you have used the Operator based installation of Microcks, you\u0026rsquo;ll need to add some extra properties to your MicrocksInstall custom resource. The fragment below shows the important ones:\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: [...] features: async: enabled: true [...] googlepubsub: project: my-gcp-project-347219 serviceAccountSecretRef: secret: my-googlecloud-service-account fileKey: my-googlecloud-service-account.json The async feature should of course be enabled, and then the important things to notice are located in the googlepubsub block:\nproject is the project identifier of your Google project where the Pub/Sub service is located, serviceAccountSecretRef is the name + the file key name for the Secret holding our Service Account private key we just previously created. If you have used the Helm Chart based installation of Microcks, this is the corresponding fragment put in a Values.yml file:\n[...] features: async: enabled: true [...] googlepubsub: project: my-gcp-project-347219 serviceAccountSecretRef: secret: my-googlecloud-service-account fileKey: my-googlecloud-service-account.json Actual connection to the Google Pub/Sub service will only be made once Microcks sends mock messages to it. Let\u0026rsquo;s see below how to use the Pub/Sub binding with AsyncAPI.\n2. Use Pub/Sub in AsyncAPI As Google Pub/Sub is not the default binding in Microcks, you should explicitly add it as a valid binding within your AsyncAPI contract. Below is a fragment of an AsyncAPI specification file that shows the important things to notice when planning to use Google Pub/Sub and Microcks with AsyncAPI. It comes from one sample you can find on our GitHub repository.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] 
channels: user/signedup: [...] subscribe: [...] bindings: googlepubsub: topic: projects/my-project/topics/my-topic message: [...] payload: [...] You\u0026rsquo;ll notice that we just have to add a non-empty googlepubsub block within the operation bindings. Just define one property (like topic for example) and Microcks will detect this binding has been specified. See the full binding spec for details.\nAs usual, as Microcks internal mechanics are based on examples, you will also have to attach examples to your AsyncAPI specification.\nasyncapi: \u0026#39;2.0.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] channels: user/signedup: [...] subscribe: [...] message: [...] examples: - laurent: summary: Example for Laurent user headers: |\u0026gt; {\u0026#34;my-app-header\u0026#34;: 23} payload: |\u0026gt; {\u0026#34;id\u0026#34;: \u0026#34;{{randomString(32)}}\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;{{now()}}\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} - john: summary: Example for John Doe user headers: my-app-header: 24 payload: id: \u0026#39;{{randomString(32)}}\u0026#39; sendAt: \u0026#39;{{now()}}\u0026#39; fullName: John Doe email: [email protected] age: 36 If you\u0026rsquo;re not yet accustomed to it, you may wonder what this {{randomString(32)}} notation is. These are just Templating functions that allow generation of dynamic content! 😉\nNow simply import your AsyncAPI file into Microcks either using a Direct upload import or by defining an Importer Job. Both methods are described on this page.\n3. Validate your mocks Now it’s time to validate that mock publication of messages on the targeted Pub/Sub is correct. In a real world scenario this means developing a consuming script or application that connects to the topic where Microcks is publishing messages.\nFor our User signed-up API, we have such a consumer in one GitHub repository. Like in Step 1 above, you\u0026rsquo;ll need a Service Account and its key file so that our consumer will be able to connect to Pub/Sub. This Service Account must have the roles/pubsub.subscriber role.
If you choose to reuse the previously created Service Account, you\u0026rsquo;ll have to issue this additional command:\n$ gcloud projects add-iam-policy-binding $PROJECT \\ --member=serviceaccount:microcks-pubsub-sa@$PROJECT.iam.gserviceaccount.com \\ --role=roles/pubsub.subscriber Now, with the Service Account key file at hand - let\u0026rsquo;s say at /Users/me/google-cloud-creds/my-gcp-project-347219/pubsub-service-account.json - you\u0026rsquo;ll have to follow these steps to retrieve our client tooling, install dependencies and check the Microcks mocks:\n$ git clone https://github.com/microcks/api-tooling.git $ cd api-tooling/async-clients/googlepubsub-client $ npm install $ node consumer.js my-gcp-project-347219 UsersignedupAPI-0.1.20-user-signedup /Users/me/google-cloud-creds/my-gcp-project-347219/pubsub-service-account.json Connecting to my-gcp-project-347219 on topic UsersignedupAPI-0.1.20-user-signedup with sub gpubsub-client-echo { \u0026#34;id\u0026#34;: \u0026#34;rZxAKnfxbe7yCXAJLENTHtnBI64H2KRN\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1675767350743\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41 } { \u0026#34;id\u0026#34;: \u0026#34;ApOlHGyEGEkZnDKeQ3CE3oLpqZ7vVL7v\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;1675767350743\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;John Doe\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 36 } [...] 🎉 Fantastic! We are receiving the two different messages corresponding to the two defined examples every 3 seconds, which is the default publication frequency. You\u0026rsquo;ll notice that the id and sendAt properties have different values each time thanks to the templating notation.\n4. Run AsyncAPI tests Now the final step is to perform some tests of the validation features in Microcks. As we will need an API implementation for that - and it’s not as easy as writing an HTTP-based API implementation - we have some helpful scripts in our api-tooling GitHub repository. These scripts are made for working with the User signed-up API sample we used so far but feel free to adapt them for your own use.\nImagine that you want to validate messages from a QA environment on a dedicated Google Cloud project.
As the QA project access is secured, you\u0026rsquo;ll need - as described above in Step 1 - to retrieve a Service Account key file, with this Service Account having the roles/pubsub.subscriber role as described in Step 3.\nStill being in the googlepubsub-client folder, now use the producer.js utility script to publish messages on a user-signups topic hosted by a my-qa-gcp-project-347223 project with local access to your Service Account key file:\n$ node producer.js my-qa-gcp-project-347223 user-signups /Users/me/google-cloud-creds/my-qa-gcp-project-347223/pubsub-service-account.json Connecting to my-qa-gcp-project-347223 on user-signups Sending {\u0026#34;id\u0026#34;:\u0026#34;jhlch3gv1dexkodt71zet\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1675848599703\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:43} Sending {\u0026#34;id\u0026#34;:\u0026#34;gm6c39oa69nw7dukbpper\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1675848602703\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:43} [...] Do not interrupt the execution of the script for now.\nAs the QA Pub/Sub access is secured, we will first have to manage a Secret in Microcks to hold this information. Within the Microcks console, first go to the Administration section and the Secrets tab.\nAdministration and Secrets will only be available to people having the administrator role assigned. Please check this documentation for details.\nOn this tab, you\u0026rsquo;ll have to create a Token Authentication secret with the value being the content of a Service Account key file encoded in base64. This Service Account is not necessarily the one you\u0026rsquo;ve used previously for producing messages as this one must have the roles/pubsub.publisher role. You\u0026rsquo;ll typically get the token value by executing this command:\n$ cat googlecloud-service-account.json | base64 The screenshot below illustrates the creation of such a secret for your QA PubSub Service Account with username and credentials.\nOnce saved, we can go and create a New Test within the Microcks web console. Use the following elements in the Test form:\nTest Endpoint: googlepubsub://my-qa-gcp-project-347223/user-signups that is referencing the Google Pub/Sub service and topic endpoint, Runner: ASYNC API SCHEMA for validating against the AsyncAPI specification of the API, Timeout: Keep the default of 10 seconds, Secret: This is where you\u0026rsquo;ll select the QA PubSub Service Account you previously created. Launch the test, wait a few seconds and you should get access to the test results as illustrated below:\nThis is fine and we can see that Microcks captured messages and validated them against the payload schema that is embedded in the AsyncAPI specification. In our sample, every property is required, the message does not allow additionalProperties to be defined, and sendAt is of string type.\nSo now let\u0026rsquo;s see what happens if we tweak that a bit\u0026hellip; Open the producer.js script in your favorite editor to comment out lines 24 and 25 and to uncomment lines 26 and 27.
It\u0026rsquo;s removing the fullName property, adding an unexpected displayName property, and also changing the type of the sendAt property, as shown below after having restarted the producer:\n$ node producer.js my-qa-gcp-project-347223 user-signups /Users/me/google-cloud-creds/my-qa-gcp-project-347223/pubsub-service-account.json Connecting to my-qa-gcp-project-347223 on user-signups Sending {\u0026#34;id\u0026#34;:\u0026#34;2zzo4kf16mxu5e6k8hyecl\u0026#34;,\u0026#34;sendAt\u0026#34;:1675946954300,\u0026#34;displayName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:43} Sending {\u0026#34;id\u0026#34;:\u0026#34;9ny4r1qu1p5xv37wxufshm\u0026#34;,\u0026#34;sendAt\u0026#34;:1675946957300,\u0026#34;displayName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:43} Sending {\u0026#34;id\u0026#34;:\u0026#34;uriayo3qh5b1z0y8zd5d7x\u0026#34;,\u0026#34;sendAt\u0026#34;:1675946960301,\u0026#34;displayName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:43} [...] Relaunch a new test and you should get results similar to those below:\n🥳 We can see that there\u0026rsquo;s now a failure and that\u0026rsquo;s perfect! What does that mean? It means that when your application or devices are sending garbage, Microcks will be able to spot this and inform you that the expected message format is not respected.\nNote that even if the test duration is 10 seconds, you may receive more messages than the number of messages sent by the producer during those 10 seconds\u0026hellip; 🤔 This is because Pub/Sub subscriptions, which are mandatory for consuming messages, have a minimum message retention policy of 10 minutes. Microcks creates such subscriptions with the minimum retention duration (10 minutes) and expiration delay (1 day). So depending on when you launch your test, you may reuse an already created subscription that has accumulated messages before your test actually starts.\nWrap-Up In this guide we have seen how Microcks can also be used to send mock messages on a Google Pub/Sub managed service connected to the Microcks instance. This helps speed up the development of applications consuming these messages. We finally ended up demonstrating how Microcks can be used to detect any drifting issues between the expected message format and the one effectively used by real-life producers.\nThanks for reading and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/automation/jenkins/","title":"Using in Jenkins Pipeline","description":"","searchKeyword":"","content":"Overview This guide illustrates how you can use the Microcks Jenkins plugin to keep Microcks in sync with your API specifications and integrate test stages within your Jenkins CI/CD pipelines. This plugin allows your Jenkins builds and jobs to import API Artifacts into a Microcks instance and to launch new Tests. It uses a Service Account, so that documentation is definitely worth the read 😉\nThe Microcks Jenkins plugin has its own GitHub repository and its own lifecycle.\n1. Download the Jenkins plugin The Microcks Jenkins plugin is available and can be downloaded from the Maven Central repository. Just get the HPI file and install it on your Jenkins master in your preferred way.\n2. 
Setup the Jenkins plugin This plugin uses an identified Service Account when connecting to the Microcks API. It is also able to manage multiple Microcks instances and hide the technical details from the Jobs using the Microcks plugin.\nAs a Jenkins administrator, go to the Manage Jenkins page and find the Microcks section. You should be able to add and configure as many instances of Microcks installations as you want, like the 2 configured in the screenshot below:\nA Microcks installation configuration needs the following parameters:\nA Name will be used by your Jobs or Pipelines as a reference of an environment, The API URL is the endpoint of your Microcks server receiving API calls, The Credentials to use for authenticating the Service Account and allowing it to retrieve an OAuth token. These are Credentials that should be registered in Jenkins, The Disable Cert Validation can be checked if you are using self-signed certificates, for example. You should then be able to test the connection to endpoints and save your configuration. Later, your Jobs and Pipelines will just use the installation Name to refer to it from their build steps.\n3. Using the Jenkins plugin The Jenkins plugin may be used in 2 ways:\nAs a simple Build Step using a form to define what service to test, As an action defined using a Domain Specific Language within a Pipeline stage It provides two different actions or build steps: the Import API specification files in Microcks step and the Launch Microcks Test Runner step.\nImport API Build step usage When defining a new project in the Jenkins GUI, you may want to add a new Import API specification files in Microcks step as shown in the capture below.\nThe parameters that can be set here are:\nThe Server: this is the Name of your running instance of Microcks that is registered in Jenkins (see the previous setup step), The Comma separated list of API specifications to import: this is simply a /my/file/path[:is_primary],/my/file/path2[:is_primary] expression. You should point to local files in your job workspace, typically those coming from a checkout or clone of your source repository. Optionally, you can specify whether they should be considered as a main or primary artifact (true value) or a secondary artifact (false value). See the Multi-artifacts explanation documentation. The default is true, so an artifact is considered primary. DSL plugin usage When defining a new CI/CD pipeline - either through the Jenkins or OpenShift GUI or through a Jenkinsfile within your source repository - you may want to add a specific microcksImport within your pipeline script as in the example below:\nnode(\u0026#39;master\u0026#39;) { stage (\u0026#39;build\u0026#39;) { // Clone sources from repo. git \u0026#39;https://github.com/microcks/microcks-cli\u0026#39; } stage (\u0026#39;importAPISpecs\u0026#39;) { // Add Microcks import here. microcksImport(server: \u0026#39;microcks-localhost\u0026#39;, specificationFiles: \u0026#39;samples/weather-forecast-openapi.yml:true,samples/weather-forecast-postman.json:false\u0026#39;) } stage (\u0026#39;promoteToProd\u0026#39;) { // ... } stage (\u0026#39;deployToProd\u0026#39;) { // ... } } The parameters that can be set here are the same as in the Build Step usage, but take care with case and typos:\nThe server: this is the Name of your running instance of Microcks that is registered in Jenkins (see the previous setup step), The specificationFiles: this is simply a /my/file/path[:is_primary],/my/file/path2[:is_primary] expression.
Launch Test Build step usage When defining a new project in the Jenkins GUI, you may want to add a new Launch Microcks Test Runner step as shown in the capture below.\nThe parameters that can be set here are:\nThe Server: this is the Name of your running instance of Microcks that is registered in Jenkins (see the previous setup step), The Service Identifier to launch tests for: this is simply a service_name:service_version expression, The Test Endpoint to test: this is a valid endpoint where your service or API implementation has been deployed, The Runner Type to use: this is the test strategy you may want to have regarding the endpoint, The Verbose flag: allows collecting detailed logs on the Microcks plugin execution, The Timeout configuration: allows you to override the default timeout for these tests. DSL plugin usage When defining a new CI/CD pipeline - either through the Jenkins or OpenShift GUI or through a Jenkinsfile within your source repository - you may want to add a specific microcksTest within your pipeline script as in the example below:\nnode(\u0026#39;maven\u0026#39;) { stage (\u0026#39;build\u0026#39;) { // ... } stage (\u0026#39;deployInDev\u0026#39;) { // ... } stage (\u0026#39;testInDev\u0026#39;) { // Add Microcks test here. microcksTest(server: \u0026#39;microcks-minishift\u0026#39;, serviceId: \u0026#39;Beer Catalog API:0.9\u0026#39;, testEndpoint: \u0026#39;http://beer-catalog-impl-beer-catalog-dev.52.174.149.59.nip.io/api/\u0026#39;, runnerType: \u0026#39;POSTMAN\u0026#39;, verbose: \u0026#39;true\u0026#39;, waitTime: 5, waitUnit: \u0026#39;sec\u0026#39;) } stage (\u0026#39;promoteToProd\u0026#39;) { // ... } stage (\u0026#39;deployToProd\u0026#39;) { // ... } } The parameters that can be set here are the same as in the Build Step usage, but take care with case and typos:\nThe server: this is the Name of your running instance of Microcks that is registered in Jenkins (see the previous setup step), The serviceId to launch tests for: this is simply a service_name:service_version expression, The testEndpoint to test: this is a valid endpoint where your service or API implementation has been deployed, The runnerType to use: this is the test strategy you may want to have regarding the endpoint, The verbose flag: allows collecting detailed logs on the Microcks plugin execution, The waitTime configuration: allows you to override the default time quantity for these tests, The waitUnit configuration: allows you to override the default time unit for these tests (values in milli, sec or min). Wrap-up Following this guide, you have learned how to get and use the Microcks Jenkins plugin.\nUsing Microcks and its Jenkins plugin, you may achieve some clean CI/CD pipelines that ensure your developed API implementation is fully aligned with expectations.\nThe most up-to-date information and reference documentation can be found in the repository README.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/soapui-conventions/","title":"SoapUI Conventions","description":"","searchKeyword":"","content":"Conventions In order to be correctly imported and understood by Microcks, your SoapUI project should follow a little set of reasonable conventions and best practices.\nYour SoapUI project may contain one or more Service definitions.
However, because it\u0026rsquo;s a best practice to consider each Service or API as an autonomous and isolated software asset, we\u0026rsquo;d recommend managing only one Service definition per SoapUI project, Your SoapUI Mock Service should define a custom property named version that allows tracking of the Service(s) version. It is a good practice to change this version identifier for each versioned change of the Service or API interface, The name of Test Requests should be something like \u0026quot;\u0026lt;sample_id\u0026gt; Request\u0026quot;. For example: \u0026quot;Karla Request\u0026quot;, The name of Mock Responses should be something like \u0026quot;\u0026lt;sample_id\u0026gt; Response\u0026quot;. For example: \u0026quot;Karla Response\u0026quot;, The name of matching rules should be something like \u0026quot;\u0026lt;sample_id\u0026gt;\u0026quot;. For example: \u0026quot;Karla\u0026quot;, We recommend having a look at our sample SoapUI projects for SOAP WebServices and for REST APIs to fully understand and see in action those conventions.\nIllustration Project initialization Project initialization is as simple as creating a new Empty Project in SoapUI. The Test Requests you will need to define later will be defined through a SoapUI TestSuite; the Mock Responses through a SoapUI ServiceMock. It is a better choice to directly create those items through the wizard when choosing the Add WSDL or Add WADL actions once the project has been created.\nThe screenshot below shows how to add a WSDL to an existing empty project:\nDefining Test Requests The sample requests that are used by Microcks are SoapUI TestSuite requests. So select the newly imported Service, right-click and choose Generate TestSuite. You should get the following screenshot where you select these options, validate and then give your TestSuite a name like \u0026quot;\u0026lt;Service\u0026gt; TestSuite\u0026quot; or something:\nYou are now free to create as many TestSteps as you want within the TestCases. TestCases represent the Operation level and TestSteps represent the request sample level. The screenshot below shows how we have created 2 sample requests (Andrew and Karla) for the sayHello operation of our WebService:\nAs shown above, you are also free to add some assertions within your TestStep requests. The SoapUI documentation introduces the assertion concept on this page. Assertions in TestSteps can later be reused when you want to use Microcks for Contract testing of your Service.\nDefining Mock Responses Mock Responses are defined through a SoapUI ServiceMock. You have to select the newly imported Service, right-click and choose Generate MockService. You can leave the default options as shown below and give your MockService a name like \u0026quot;\u0026lt;Service\u0026gt; Mock\u0026quot;:\nYou will now be able to create as many Responses attached to an Operation as you\u0026rsquo;ve got sample requests defined in the previously created TestSteps. As introduced in the naming conventions, your responses must have the same \u0026quot;\u0026lt;sample_id\u0026gt;\u0026quot; radix as the associated requests so that Microcks will later be able to associate them.\nThe screenshot above shows a Mock response corresponding to the Andrew Request. It is simply called Andrew Response.
Note that you are free to set up any HTTP header you want for responses; Microcks will reuse them later to issue real headers in responses.\n💡 Note that you can use the templating notation in your SOAP responses for better/smarter/more dynamic responses. It brings specific features for XML like XPath expressions or context expressions you may have initialized using a SCRIPT dispatcher.\n💡 Note also that for compatibility purposes, Microcks supports the SoapUI expression notation: the SoapUI ${ } notation will be translated into the Microcks double-mustaches notation {{ }} internally. You may also of course directly use our {{ }} notation though 😉\nDefining dispatch rules The last step is now to define a technical means for Microcks to analyze an incoming request and find the corresponding response to return. This is done in Microcks via the concept of a Dispatcher, which represents a dispatch strategy, and Dispatcher Rules, which represent the dispatching parameters. Microcks supports 3 strategies for dispatching SOAP requests:\nVia the analysis of the SOAP request payload through XPath, Via the evaluation of a Groovy script, Via the random dispatching strategy. These three strategies have equivalents in SoapUI via the Dispatch configuration on each Operation of your Mock Service.\nUsing XPath expression After double-clicking on the operation node of your Mock Service, a window as shown in the following screenshot should appear. If you want to use XPath for matching, select the QUERY_MATCH Dispatch and associate a Mock Response (upper section) with a new Match Rule (lower left) defining an XPath assertion (lower right). You can use the Extract helper in SoapUI if you\u0026rsquo;re not familiar with XPath expressions.\n🚨 Warning: The XPath expression used by your different Match Rules must be strictly the same. You cannot use different expressions for different rules.\nBelow is an example of using the name found in the incoming request to find a matching response.\nUsing a Groovy script Another way of defining matching rules is using a Groovy script. Such a script allows you to define much more logic for finding a response for an incoming request. With scripts you can use the request payload but also have access to the query string, HTTP headers and so on.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/references/configuration/","title":"Configuration Reference","description":"Here below all the documentation pages related to **Configuration Reference**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/async-protocols/aws-sqs-sns-support/","title":"SQS/SNS Mocking & Testing","description":"","searchKeyword":"","content":"Overview This guide shows you how to use the Amazon SQS and Amazon SNS messaging services with Microcks. As those two services are very frequently used in combination, we decided to cover both of them in the same guide as principles and configuration are very similar. However, Microcks can provide mocking and testing services for SQS only, or for SNS only: you don\u0026rsquo;t have to use both to benefit from Microcks features.\nAmazon Simple Queue Service (SQS) lets you send, store, and receive messages between software components. As stated by the name, it is a message queuing service where one message from a queue can only be consumed by one component.
Amazon Simple Notification Service (SNS) sends notifications two ways and provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications.\nMicrocks supports Amazon SQS and SNS as protocol bindings for AsyncAPI. That means that Microcks is able to connect to either the SQS or SNS service for publishing mock messages as soon as it receives a valid AsyncAPI Specification, and to connect to any SQS/SNS queue or topic to check that flowing messages are compliant with the schema described within your specification.\nLet\u0026rsquo;s go! 🏄♂️\n1. Setup AWS services connection The first mandatory step here is to set up Microcks so that it will be able to connect to the target AWS service for sending mock messages. In order to do that, you\u0026rsquo;ll need to ensure you have proper credentials in your cluster.\nAs accessing an AWS Service is subject to authentication and authorization, the pre-requisite is to create one IAM Account with the required policies. If you plan to use both services, we recommend creating two different accounts so that you\u0026rsquo;ll limit the scope of risk.\nOnce you get your IAM account ready, you\u0026rsquo;ll need its access keys so that an application running outside of AWS will be able to use those services.\nFrom there, you have two options to provide the access key details to Microcks:\nStore these access key details directly as keys in a Kubernetes Secret - keys that will be injected as environment variables within your Microcks instance, Store these access key details in a Profile file you\u0026rsquo;ll also wrap into a Kubernetes Secret. This Secret will be mounted on the Microcks instance filesystem in read-only mode. Based on the option you chose and the way you handle secrets in your cluster, you\u0026rsquo;ll typically have to issue one of these commands.\nCreate a Secret for environment variable usage:\n$ kubectl create secret generic my-aws-credentials \\ --from-literal=access_key_id=$AWS_ACCESS_KEY_ID \\ --from-literal=secret_access_key=$AWS_SECRET_ACCESS_KEY \\ --from-literal=secret_token_key=$AWS_SESSION_TOKEN \\ -n microcks Create a Secret for profile file usage:\n$ kubectl create secret generic my-aws-credentials \\ --from-file=./my-aws-credentials.profile -n microcks You also have to ensure that this IAM Account has the required permissions for connecting to the service. In order to use the SQS service with Microcks, your IAM account will need the AmazonSQSFullAccess policy to create, list and get details on queues but also publish messages to them. In order to use the SNS service with Microcks, your IAM account will need the AmazonSNSFullAccess policy to create, list and get details on topics but also publish messages to them.\nIf you have used the Operator based installation of Microcks, you\u0026rsquo;ll need to add some extra properties to your MicrocksInstall custom resource. The fragment below shows the important ones with the 2 alternatives: using a Secret whose keys will be injected as environment variables, or using a Secret holding a profile file that will be mounted on the filesystem:\napiVersion: microcks.github.io/v1alpha1 kind: MicrocksInstall metadata: name: microcks spec: [...] features: async: enabled: true [...] 
sqs: region: eu-west-3 credentialsType: env-variable credentialsSecretRef: secret: my-aws-credentials #accessKeyIdKey: access_key_id # Allow customization of key #secretAccessKeyKey: secret_access_key # Allow customization of key #sessionTokenKey: secret_token_key # This one is optional #OR credentialsType: profile credentialsProfile: my-sqs-profile credentialsSecretRef: secret: my-aws-credentials fileKey: my-aws-credentials.profile sns: # Same parameters as above for SNS access The async feature should of course be enabled, and then the important things to notice are located in the sqs and sns blocks:\nregion is the region identifier where the Amazon SQS/SNS services you\u0026rsquo;re using are located, credentialsType allows specifying whether you want to use env-variable or a profile file, in the case of a profile file being used, you can specify the credentialsProfile you want to use (defaults to microcks-sqs-admin), credentialsSecretRef is the name of the Secret holding either your IAM account environment variables or profile file. You can configure either secret keys or a file key. If you have used the Helm Chart based installation of Microcks, this is the corresponding fragment put in a Values.yml file:\n[...] features: async: enabled: true [...] sqs: region: eu-west-3 credentialsType: env-variable credentialsSecretRef: secret: my-aws-credentials Actual connection to the AWS services will only be made once Microcks sends mock messages to them. Let\u0026rsquo;s see below how to use the SQS/SNS bindings with AsyncAPI.\nRunning AWS on LocalStack? Microcks supports that too! Each configuration section (for SQS and SNS) allows providing an optional endpointOverride property that will allow you to target your LocalStack instance.\nYou\u0026rsquo;ll end up with something like features.async.sqs.endpointOverride=http://localhost:4566 for example.\n2. Use AWS services in AsyncAPI As SQS and SNS are not the default bindings in Microcks, you should explicitly add them as valid bindings within your AsyncAPI contract. Below is a fragment of an AsyncAPI specification file that shows the important things to notice when planning to use SQS and Microcks with AsyncAPI. It comes from one sample you can find on our GitHub repository.\nasyncapi: \u0026#39;2.1.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] channels: user/signedup: [...] subscribe: [...] bindings: sqs: queue: name: my-sqs-queue message: [...] payload: [...] We have the exact same sample for SNS here.\nYou\u0026rsquo;ll notice that we just have to add a non-empty sqs block within the operation bindings. Just define one property (like queue.name for example) and Microcks will detect this binding has been specified. As of today, the full binding specs for SQS and SNS are not yet defined in AsyncAPI but there\u0026rsquo;s an ongoing effort to push them. As Microcks does not depend on the internal structure of the binding, future changes will not impact your mocks and tests.\nAs usual, as Microcks internal mechanics are based on examples, you will also have to attach examples to your AsyncAPI specification.\nasyncapi: \u0026#39;2.1.0\u0026#39; id: \u0026#39;urn:io.microcks.example.user-signedup\u0026#39; [...] channels: user/signedup: [...] subscribe: [...] message: [...] 
examples: - laurent: summary: Example for Laurent user headers: |\u0026gt; {\u0026#34;my-app-header\u0026#34;: 23} payload: |\u0026gt; {\u0026#34;id\u0026#34;: \u0026#34;{{randomString(32)}}\u0026#34;, \u0026#34;sendAt\u0026#34;: \u0026#34;{{now()}}\u0026#34;, \u0026#34;fullName\u0026#34;: \u0026#34;Laurent Broudoux\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;[email protected]\u0026#34;, \u0026#34;age\u0026#34;: 41} - john: summary: Example for John Doe user headers: my-app-header: 24 payload: id: \u0026#39;{{randomString(32)}}\u0026#39; sendAt: \u0026#39;{{now()}}\u0026#39; fullName: John Doe email: [email protected] age: 36 If you\u0026rsquo;re not yet accustomed to it, you may wonder what this {{randomString(32)}} notation is. These are just Templating functions that allow generation of dynamic content! 😉\nNow simply import your AsyncAPI file into Microcks either using a Direct upload import or by defining an Importer Job. Both methods are described on this page.\n3. Validate your mocks Now it’s time to validate that mock publication of messages on the targeted SQS Queue or SNS Topic is correct. In a real world scenario this means developing a consuming script or application that connects to the topic where Microcks is publishing messages.\nThe easiest way of doing things here would be to use the AWS console to get a quick check on what is actually published by Microcks. As soon as you have imported the AsyncAPI spec, Microcks has created a new queue named UsersignedupAPI-0140-user-signedup (depending on the API name, version and operation channel) and starts publishing messages on it. If you get on the screen that allows sending and receiving messages, you\u0026rsquo;ll get something like:\nAccessing the details of one of the polled messages will give you a content similar to this one:\n🎉 Fantastic! We are receiving the two different messages corresponding to the two defined examples every 3 seconds, which is the default publication frequency. You\u0026rsquo;ll notice that the id and sendAt properties have different values each time thanks to the templating notation.\nAnd for SNS? Checking SNS with the sample we provide will basically provide the same results but on an SNS Topic named UsersignedupAPI-0150-user-signedup. In order to see messages sent to this topic, you\u0026rsquo;ll have to create a subscription to route messages to an endpoint like an SQS Queue. That way, you\u0026rsquo;ll browse messages the same way we did just before.\n4. Run AsyncAPI tests Now the final step is to perform some tests of the validation features in Microcks. Here again, for the sake of simplicity, we\u0026rsquo;ll use the AWS console to send test messages to either an SQS Queue or an SNS Topic.\nImagine that you want to validate messages from a QA environment on a specific Amazon region (not necessarily the one the Microcks instance is connected to for mocking purposes). As the QA resources access is secured, you\u0026rsquo;ll need - as described above in Step 1 - to retrieve an IAM account\u0026rsquo;s access key credentials. In order to run tests on the SQS service, such an IAM account will require the AmazonSQSReadOnlyAccess policy to list queues, get queue attributes and read messages. In order to run tests on the SNS service, the IAM account will require slightly more permissions: the AmazonSNSFullAccess and the AmazonSQSFullAccess policies.
This is actually necessary as Microcks will dynamically create a temporary SQS queue and SNS subscription in order to perform a test.\nOnce you get the IAM account access key, you will then have to manage a Secret in Microcks to hold this information. Within the Microcks console, first go to the Administration section and the Secrets tab.\nAdministration and Secrets will only be available to people having the administrator role assigned. Please check this documentation for details.\nOn this tab, you\u0026rsquo;ll have to create a Basic Authentication secret with the username being the Access Key Id of your IAM account and the password being its Secret Access Key.\nThe screenshot below illustrates the creation of such a secret for your aws-qa-sqsreaduser with username and credentials.\nWe can now prepare for a first test! Open the AWS SQS console in your region of choice (we\u0026rsquo;ve chosen eu-west-3 in our example below) and create a user-signups standard queue. Go to the Send and receive messages page and prepare to send the following message:\n{\u0026#34;id\u0026#34;:\u0026#34;gm6c39oa69nw7dukbpper\u0026#34;,\u0026#34;sendAt\u0026#34;:\u0026#34;1675848602703\u0026#34;,\u0026#34;fullName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:44} Don\u0026rsquo;t click Send message just yet, but be prepared!\nWe\u0026rsquo;re now going to launch a New Test within the Microcks web console. Use the following elements in the Test form:\nTest Endpoint: sqs://eu-west-3/user-signups that is referencing the SQS region and queue endpoint, Runner: ASYNC API SCHEMA for validating against the AsyncAPI specification of the API, Timeout: Keep the default of 10 seconds, Secret: This is where you\u0026rsquo;ll select the aws-qa-sqsreaduser secret you previously created. And for SNS? Well it\u0026rsquo;s basically exactly the same thing with a slight variation in test endpoint syntax. You will have to put there something like sns://eu-west-3/user-signups where user-signups is the name of the SNS Topic your application is using.\nLaunch the test and quickly switch to the AWS console to send a bunch of messages. Wait a few seconds and you should get access to the test results as illustrated below:\nThis is fine and we can see that Microcks captured messages and validated them against the payload schema that is embedded in the AsyncAPI specification. In our sample, every property is required, the message does not allow additionalProperties to be defined, and sendAt is of string type.\nSo now let\u0026rsquo;s see what happens if we tweak that a bit\u0026hellip; We\u0026rsquo;re going to re-launch the same test but using the JSON below to simulate invalid messages:\n{\u0026#34;id\u0026#34;:\u0026#34;2zzo4kf16mxu5e6k8hyecl\u0026#34;,\u0026#34;sendAt\u0026#34;:1675848602937,\u0026#34;displayName\u0026#34;:\u0026#34;Laurent Broudoux\u0026#34;,\u0026#34;email\u0026#34;:\u0026#34;[email protected]\u0026#34;,\u0026#34;age\u0026#34;:44} Relaunch a new test and you should get results similar to those below:\n🥳 We can see that there\u0026rsquo;s now a failure and that\u0026rsquo;s perfect! What does that mean? It means that when your application or devices are sending garbage, Microcks will be able to spot this and inform you that the expected message format is not respected.\nRunning AWS on LocalStack? Microcks supports that too!
You\u0026rsquo;ll just have to add an extra overrideUrl option to your test endpoint URL so that Microcks will target your LocalStack instance.\nYou\u0026rsquo;ll end up with something like sqs://eu-west-3/user-signups?overrideUrl=http://localhost:4566 for example.\nWrap-Up In this guide, we have seen how Microcks can also be used to send mock messages on an SQS Queue or SNS Topic managed service connected to the Microcks instance. This helps speed up the development of applications consuming these messages. We finally demonstrated how Microcks can be used to detect any drifting issues between the expected message format and the one effectively used by real-life producers.\nThanks for reading, and let us know what you think on our Discord chat 🐙\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/automation/gitlab/","title":"Using in GitLab CI","description":"","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/references/artifacts/har-conventions/","title":"Http Archive Conventions","description":"","searchKeyword":"","content":"Conventions In order to be correctly imported and understood by Microcks, your HAR file should follow a small set of reasonable conventions and best practices.\nThe HAR format doesn\u0026rsquo;t have the notion of API name or version. In Microcks, this notion is critical, and we thus need a specific comment notation to get this information. You\u0026rsquo;ll need to add a comment line starting with microcksId: in your file and then referencing the \u0026lt;API name\u0026gt;:\u0026lt;API version\u0026gt;. HAR provides a header log structure that may host such a comment. See an example below: { \u0026#34;log\u0026#34;: { \u0026#34;version\u0026#34;: \u0026#34;1.2\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;microcksId: API Pastries:0.0.2\u0026#34;, \u0026#34;creator\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;WebInspector\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;537.36\u0026#34; }, [...] } Optionally - if the captured traffic is located behind gateways or other components rewriting URLs - you may want to remove the API invocation URL prefix to better fit your API definition. You can then add an additional comment line starting with apiPrefix: to specify a part of the path Microcks will remove from found paths. See an example below: { \u0026#34;log\u0026#34;: { \u0026#34;version\u0026#34;: \u0026#34;1.2\u0026#34;, \u0026#34;comment\u0026#34;: \u0026#34;microcksId: API Pastries:0.0.2 \\n apiPrefix: /my/prefix/toRemove\u0026#34;, \u0026#34;creator\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;WebInspector\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;537.36\u0026#34; }, [...] } With this configuration, the apiPrefix is used for 2 purposes by Microcks:\nto filter out the log entries that are not starting with /my/prefix/toRemove, to clean up the path and find a short operation name. As an example, imagine you have a GET https://pastries.acme.org/my/prefix/toRemove/pastries?size=S log entry in the HAR file: it will be considered a valid entry because it contains the apiPrefix once host information has been removed, and it will be considered an example for the GET /pastries operation.\nPrimary or Secondary? A HAR file can be imported either as a primary artifact or as a secondary artifact.\nWhen imported as a primary artifact, Microcks tries to guess the type of API (REST, GRAPHQL or SOAP) looking at the payload of requests.
It also tries to find similarities between entry paths to deduce operations for your API.\nWhen imported as a secondary artifact - a primary one being an OpenAPI specification, a GraphQL Schema or a SoapUI project - Microcks uses the definition of the API provided by this primary artifact to associate entries with operations accordingly.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/multi-artifacts/","title":"Multi-artifacts support","description":"","searchKeyword":"","content":"Introduction Microcks can have multiple artifacts (one primary and many secondary) mapping to one API definition. The primary one will bring API or Service and operation metadata and examples. The secondary ones will only enrich existing operations with new non-conflicting requests/responses and event samples.\nA typical illustration of this may be using an OpenAPI specification as a primary one and then bringing one (or many) additional Postman collections to provide examples or test constraints.\nIn that case, Microcks is first fed with an OpenAPI file to get the main identification and structure information about the API or Service. This allows Microcks to initialize its internal metamodel for the discovered API. Then, Microcks will load the secondary artifacts and try to merge new non-conflicting information into the preexisting internal metamodel. The merging process is based on a compound key: the API name + version.\nIf not explicitly identified as primary or secondary, the default is to consider an imported artifact as the primary one. Microcks will simply ignore a secondary artifact if it doesn\u0026rsquo;t match any existing API name + version.\n💡 Note that the secondary artifact is not necessarily a Postman Collection. It can also be some other artifact like an HTTP Archive Format (HAR) file, for example. Check our reference on Supported artifacts and conventions.\nUsage for different protocols For specific types of APIs and protocols, loading multiple artifacts for the same API definition may be necessary. Typically, when a single artifact is not able to handle a comprehensive set of examples, we need to rely on secondary artifacts to provide those examples.\nIt is then mandatory to use multiple artifacts in Microcks for GraphQL, gRPC and Swagger v2 defined APIs, as the primary artifacts that provide the structure are not able to hold complete examples (yes, even Swagger v2 doesn\u0026rsquo;t allow complete examples 😉)\n💡 Here again, the secondary artifact is not necessarily a Postman Collection - it is just used here for illustration purposes. Check our reference on Supported artifacts and conventions.\nAlso, note that multiple artifacts for one API definition don\u0026rsquo;t necessarily involve different specifications and file formats! The merging process in Microcks is generic, so you can use the same format multiple times. For example, you may want to use an OpenAPI specification as a primary one and apply some overlay by managing examples in other OpenAPI files.
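As an illustration of feeding Microcks with both kinds of artifacts, here\u0026rsquo;s a sketch using the Microcks CLI - the file names are examples, and the boolean after each file marks it as primary (true) or secondary (false):
# import an OpenAPI spec as primary, then overlay a Postman Collection as secondary
microcks-cli import \u0026#39;apipastries-openapi.yaml:true,apipastries-postman-collection.json:false\u0026#39; \
    --microcksURL=https://microcks.example.com/api/ \
    --keycloakClientId=microcks-serviceaccount \
    --keycloakClientSecret=ab54d329-e435-41ae-a900-ec6b3fe15c54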
One specific case of the merging process - that can be used in combination with any other artifact as a primary one - relates to the Microcks APIMetadata format. When importing such artifacts as secondary ones, the merging process involves the metadata of the API or Service and not the examples or tests, as illustrated below:\nOpportunities Microcks\u0026rsquo; multi-artifacts support is a flexible and powerful feature that opens many opportunities for managing your artifacts.\nAn emerging use case is that some people may have a single OpenAPI file containing only base/simple examples but manage complementary/advanced examples using, for example, a Postman Collection.\nOne can extend this base use case to implement some variations:\nDifferent Postman collections for different lifecycle environments, maintained in coordination with reference datasets, Different Postman collections for different API providers implementing a shared industrial standard (think of IoT Fiware implementation but for different industry verticals), Different Postman collections for different API consumers that will allow consumer-driven contract testing. "},{"section":"Documentation","url":"https://microcks.io/documentation/references/apis/","title":"API Reference","description":"Here below all the documentation pages related to **API Reference**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/guides/integration/postman-workspace/","title":"Connecting to Postman Workspaces","description":"","searchKeyword":"","content":"Overview Postman Workspaces are a common and effective way of organizing your team API work. Workspaces allow you to collaborate while designing your API and share your API artifacts like Postman Collections.\nIn this guide, you\u0026rsquo;ll learn how to directly connect Microcks to your Postman Collection living in a Workspace so that changes in Postman may be automatically propagated to Microcks.\n1. Obtain an API Key In order to connect to your Postman Workspace, you\u0026rsquo;ll need an API Access Key so that Microcks will be able to authenticate while fetching your Collection. In order to do that, you\u0026rsquo;ll need to generate an API Key from the Postman Workspace as illustrated below:\nThis API Key must then be saved as an authentication Secret in Microcks so that your importer will be able to reference it and supply it to the Postman API using the X-API-Key header.\nAs an administrator, create a new Secret using this template and replacing the token with your own value:\n2. Share your API Now you need to retrieve the Collection API link. For that, you have to go through the Share button and select the Via API tab as illustrated in the picture below:\n🗒️ You can see that it\u0026rsquo;s also possible to generate a new API key from this step if you have skipped step 1 😉\nCopy this URL: it is unique and represents access to your Collection.\n3. Create an Importer Finally, you can then use this URL (ending just before the ?)
and use it directly as an Importer URL when creating a Scheduled Importer.\nWrap-up Congrats 🎉 You now know how to connect Microcks to your Postman Workspace in order to get direct access to the Postman Collection shared with your team!\n"},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/dynamic-content/","title":"Dynamic mock content","description":"","searchKeyword":"","content":"Introduction Whilst we deeply think that \u0026ldquo;real-world\u0026rdquo; static values for request/response samples are crucial in order to fully understand the business usages and expectations of an API, we have to admit that it is more often than not useful to introduce some kind of dynamically generated content for responses.\nThose use cases encompass:\nrandom numbers that may be defined in a range, today\u0026rsquo;s date or today\u0026rsquo;s + an amount of time (for a validity date, for example), response parts expressed from request parts (body part, header, query param) Thus, Microcks has some templating features allowing you to specify dynamic parts in response content.\nLet\u0026rsquo;s introduce this feature with an example: a simple Hello API that takes a JSON payload as request payload and returns a Greeting response including: the id of the message, the date of message generation, and the message content itself that just says Hello!.\nYou can find the OpenAPI v3 contract of this API here and here\u0026rsquo;s below the result once imported into Microcks:\nYou\u0026rsquo;ll notice that the response payload is expressed using some templating mustaches ({{ and }}), indicating that Microcks should recognize the delimited expression and replace it with new values.\nWhen invoked twice with different params at different dates, here are the results:\n$ curl -XGET http://microcks.example.com/rest/Hello+Dynamic+API/1.0.0/hello -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;World\u0026#34;}\u0026#39; -s | jq . { \u0026#34;id\u0026#34;: \u0026#34;pQnDIytzeYJFLxaQg56yObw0WTpYNBMjPYu7FLBoNSGF6ZJsTcHov5ZmaiWG8Gt8\u0026#34;, \u0026#34;date\u0026#34;: \u0026#34;10/02/2020\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Hello World!\u0026#34; } # Wait for a day... $ curl -XGET http://microcks.example.com/rest/Hello+Dynamic+API/1.0.0/hello -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;Laurent\u0026#34;}\u0026#39; -s | jq . { \u0026#34;id\u0026#34;: \u0026#34;Hn9lUKkzYsvQq98wDEHa7Ln3H4eVfnfpJLLPPe4ns9vBgaTRvblOOBHIVq3BluEC\u0026#34;, \u0026#34;date\u0026#34;: \u0026#34;11/02/2020\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Hello Laurent!\u0026#34; } Here we are: 1 sample definition but dynamic content generated on purpose!\nFew concepts Let\u0026rsquo;s explain the few concepts behind Microcks templating features. These are really simple and straightforward:\nAn expression should be delimited by mustaches like this: {{ expression }}. This pattern can be included in any textual representation of your response body content: plain text, JSON, XML, whatever\u0026hellip; Microcks will just replace this pattern by its evaluated content, or null if evaluation fails for any reason, An expression can be a reference to a context variable. In this case, we use a . notation to tell which property of this variable we refer to.
Built-in contextual information is attached to a variable named request, so we may use an expression like request.body for example, An expression can also be a function evaluation. In this case, we use a () notation to indicate the function name and its arguments. For example, we use randomString(64) to evaluate the random string generation function with one arg being 64 (the length of the desired string), An expression may also include the \u0026gt; redirect character so that the result from a first evaluation is injected as an extra argument on the next function. For example, you may use uuid() \u0026gt; put(myId), so that the result of the uuid() function is printed out and also injected as the second argument of the put() function, meaning it will be stored within the myId context variable. Pretty easy. No? 🎉\nYou can check the Mock Templates reference to get the full list of available variable and function expressions.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/automation/tekton/","title":"Using in Tekton Pipeline","description":"","searchKeyword":"","content":"Overview This guide shows you how to integrate Microcks into your Tekton Pipelines. Microcks provides 2 Tekton Tasks for interacting with a Microcks instance. They allow you to:\nImport Artifacts in a Microcks instance. If the import succeeds, the workflow continues; if not, it fails, Launch a test on a deployed API endpoint. If the test succeeds (i.e. the API endpoint is conformant with the API contract in Microcks), the workflow continues; if not, it fails. Those 2 tasks are basically a wrapper around the Microcks CLI and are using a Service Account.\n1. Import Tasks in your cluster Microcks Tekton Tasks are located in the /tekton folder of the Microcks CLI repository.\nThe microcks-import-task.yaml holds a Tekton Task definition for importing artifacts.\nThe microcks-test-task.yaml holds a Tekton Task definition for launching tests.\nBoth tasks require that you first create a Kubernetes Secret named microcks-keycloak-client-secret to hold your Service Account information. Here\u0026rsquo;s below a sample of such a Secret using the default provided Service Account information:\nkind: Secret apiVersion: v1 type: Opaque metadata: name: microcks-keycloak-client-secret stringData: clientId: microcks-serviceaccount clientSecret: ab54d329-e435-41ae-a900-ec6b3fe15c54 After having created the above secret, you can import both tasks in your cluster namespace:\nkubectl create -f https://raw.githubusercontent.com/microcks/microcks-cli/master/tekton/microcks-import-task.yaml -n my-namespace kubectl create -f https://raw.githubusercontent.com/microcks/microcks-cli/master/tekton/microcks-test-task.yaml -n my-namespace 2. Use Tasks in a Pipeline Once the tasks are registered within your cluster namespace, you can integrate them within your Pipeline like this:\napiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: user-registration-tekton-pipeline spec: tasks: - name: deploy-app taskRef: [...]
- name: test-openapi-v1 taskRef: name: microcks-test runAfter: - deploy-app params: - name: apiNameAndVersion value: \u0026#34;User registration API:1.0.0\u0026#34; - name: testEndpoint value: http://user-registration.apps.acme.com - name: runner value: OPEN_API_SCHEMA - name: microcksURL value: https://microcks.acme.com/api/ - name: waitFor value: 8sec Here above, your pipeline will first deploy your application and then ask Microcks to execute an OPEN_API_SCHEMA conformance test on the freshly deployed application (supposed to be on the http://user-registration.apps.acme.com endpoint here).\nThe parameters that can be set here are:\nThe apiNameAndVersion to launch tests for: this is simply a service_name:service_version expression, The testEndpoint to test: this is a valid endpoint where your service or API implementation has been deployed, The runner to use: this is the test strategy you may want to have regarding the endpoint, The microcksURL to access the Microcks API endpoint, The waitFor, which specifies the test timeout. 3. Run your Pipeline The pipeline can be executed by creating a new PipelineRun resource (see the sketch at the end of this section) or using the tkn CLI tool. This time we\u0026rsquo;re using the CLI tool to start a new pipeline:\n$ tkn pipeline start user-registration-tekton-pipeline PipelineRun started: user-registration-tekton-pipeline-run-64xf7 Showing logs... [...] tkn can also be used later to retrieve the logs for the pipeline execution:\n$ tkn pipelinerun logs user-registration-tekton-pipeline-run-64xf7 -f -n user-registration [...] [test-openapi-v1 : microcks-test] MicrocksClient got status for test \u0026#34;5f76e969dcba620f6d21008d\u0026#34; - success: false, inProgress: true [test-openapi-v1 : microcks-test] MicrocksTester waiting for 2 seconds before checking again or exiting. [test-openapi-v1 : microcks-test] MicrocksClient got status for test \u0026#34;5f76e969dcba620f6d21008d\u0026#34; - success: false, inProgress: true [test-openapi-v1 : microcks-test] MicrocksTester waiting for 2 seconds before checking again or exiting. [test-openapi-v1 : microcks-test] MicrocksClient got status for test \u0026#34;5f76e969dcba620f6d21008d\u0026#34; - success: true, inProgress: false [test-openapi-v1 : microcks-test] Full TestResult details are available here: https://microcks.acme.com/#/tests/5f76e969dcba620f6d21008d [...] Using the OpenShift Pipelines implementation of Tekton, you may easily get all this information at hand within the Developer Console of your OpenShift cluster. Here\u0026rsquo;s below a capture of our pipeline execution:\nAnd the view to access the logs of this execution:
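As mentioned earlier, instead of using the tkn CLI you can also trigger the pipeline declaratively by creating a PipelineRun resource. Here\u0026rsquo;s a minimal sketch (the generateName value is just an example):
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  # generateName lets the cluster suffix each run with a unique id
  generateName: user-registration-tekton-pipeline-run-
spec:
  pipelineRef:
    name: user-registration-tekton-pipeline
Note that you must use kubectl create -f (not apply) with generateName, producing a new run on each invocation.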
Wrap-up You have learned how to get and use the Microcks Tekton Tasks for your pipeline running on Kubernetes! 🎉\nIf you want to learn more about that, you can check our full Continuous Testing of ALL your APIs demonstration that has been built with the resources from the API Lifecycle repository.\nThe most up-to-date information and reference documentation can be found in the repository README.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/kubernetes-operator/","title":"On Kubernetes with Operator","description":"","searchKeyword":"","content":"TODO\n"},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/conformance-testing/","title":"Conformance testing","description":"","searchKeyword":"","content":"Introduction It is likely you have experienced the painful situation of deploying to production only to find out that an API or Service you integrate with has broken its contract. How can we effectively ensure this does not happen?\nAs introduced in Main Concepts, Microcks can be used for Contract conformance testing of APIs or services under development. You spend a lot of time describing request/response pairs and matching rules: it would be a shame not to use these samples as test cases once the development is on its way!\nYou can find on the internet many different representations of how the different testing techniques relate to one another and should ideally be combined into a robust testing pipeline. At Microcks, we particularly like the Watirmelon representation below. Microcks clearly allows you to realize Automated API Tests and focus more precisely on Contract conformance testing.\nThe purpose of Microcks tests is precisely to check that the Interaction Contract that consumer and producer agreed upon - as represented by an OpenAPI or AsyncAPI specification, a Postman collection or whatever supported Artifact - is actually respected by the API provider. In other words: to check that an implementation of the API is conformant to its contract.\n💡 If you want to learn more on this topic and to get into the details of how Microcks is different from other contract-testing or conformance testing solutions, we\u0026rsquo;ve got you covered! We recommend having a read of these two articles: Microcks and Pact for API contract testing and Different levels of API contract testing with Microcks\nConformance metrics In order to help you gain confidence in your implementations, we developed the Conformance index and Conformance score metrics that you can see on the top right of each API | Service details page:\nThe Conformance index is a kind of grade that estimates how your API contract is actually covered by the samples you\u0026rsquo;ve attached to it. We compute this index based on the number of samples you\u0026rsquo;ve got on each operation, the complexity of dispatching rules of these operations and so on\u0026hellip; It represents the maximum possible conformance score you may achieve if all your tests are successful.\nThe Conformance score is the current score that has been computed during your last test execution. We also added a trend computation showing whether things are getting better or worse compared to your history of tests on this API.\nOnce you have activated labels filtering on your repository and have run a few tests, Microcks is also able to give you an aggregated view of your API patrimony in terms of Conformance Risks.
The tree map below is displayed on the Dashboard page and represents risks in terms of average score per group of APIs (depending on the concept you chose, it could be per domain, per application, per team, \u0026hellip;)\nThis visualization allows you to have a clear understanding of your conformance risks at first glance!\nTests history and details Tests history for an API/Service is easily accessible from the API | Service summary page. Microcks keeps a history of all the launched tests on an API/Service version. Successes and failures are kept in the database with a unique identifier and test number to allow you to compare cases of success and failure.\nSpecific test details can be visualized: Microcks also records the request and response pairs exchanged with the tested endpoint so that you\u0026rsquo;ll be able to access payload content as well as headers. Failures are tracked and violated assertion messages are displayed as shown in the screenshot below:\n"},{"section":"Documentation","url":"https://microcks.io/documentation/guides/usage/async-protocols/","title":"Async Protocols","description":"Here below all the guides pages related to **Async protocols**.","searchKeyword":"","content":""},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/service-account/","title":"Service accounts","description":"","searchKeyword":"","content":"Introduction Microcks is using OpenId Connect and OAuth 2.0 bearer tokens to secure its frontend and API access. While this is very convenient for interactive users, it may be impracticable for machine-to-machine authentication when you want to interact with Microcks from a robot, CI/CD pipeline or simple CLI tool. For that, we decided to implement the simple OAuth 2.0 Client Credentials Grant in addition to other grants. This authentication is implemented using Service Account clients defined in the Realm configuration in Keycloak.\nMicrocks comes with a default account named microcks-serviceaccount that comes with the default installation, but you are free to create as many accounts as you have robot users.\nInspecting the default Service Account Let\u0026rsquo;s start inspecting the properties of the default Service Account to check its anatomy 😉 Start connecting as an administrator to the Keycloak instance your Microcks instance is using.\nJust issue the following unauthenticated API call to Microcks to get the Keycloak URL and the name of the realm you\u0026rsquo;re using:\n$ curl https://microcks.example.com/api/keycloak/config -s -k | jq . { \u0026#34;realm\u0026#34;: \u0026#34;microcks\u0026#34;, \u0026#34;resource\u0026#34;: \u0026#34;microcks-app-js\u0026#34;, \u0026#34;auth-server-url\u0026#34;: \u0026#34;https://keycloak.microcks.example.com\u0026#34;, \u0026#34;ssl-required\u0026#34;: \u0026#34;external\u0026#34;, \u0026#34;public-client\u0026#34;: true } Authenticate as administrator into the Keycloak administration console and browse the realm Microcks is using. You should see the list of Applications or Clients defined on this realm, including the default microcks-serviceaccount as in the screenshot below:\nGetting to the details of the Service Account, you can check that it is Enabled, that it should conform to the openid-connect Client Protocol with a confidential Access Type. Finally, it should also be able to do a Direct Access Grant and act as a Service Account.
See below the settings of the default account:\nOne crucial thing for a Service Account is its associated Credentials, because clients will have to know them for initiating the flow. Credentials are available in the Credentials tab as shown below:\nFinally, in order to operate correctly, a Service Account should have a role assigned. The default account comes with the user role defined in the main microcks-app OpenId client that maps to the main Microcks component:\n🚨 If you want to use the Service Account from pipelines in order to perform advanced operations like importing new Artifacts, or triggering scheduled imports, you have to give it more privileges, as the default account has just the user role.\nOn the role page in Keycloak, click on the Assign role button, filter roles by clients and pick the microcks-app \u0026gt; manager role.\nUsing a Service Account In Microcks, the default microcks-serviceaccount is used by internal components when communicating with the main Microcks webapp that is holding the API. So be careful before changing its credentials and do not delete it!\nHowever, you can create as many other Service Accounts as you may have CI/CD pipelines, CLI users or integrations with your own solutions.\nAs a sum-up, here are some basic commands showing you how to use this service account once defined:\n# account:credentials should be first encoded as base 64 $ echo \u0026#34;microcks-serviceaccount:ab54d329-e435-41ae-a900-ec6b3fe15c54\u0026#34; | base64 bWljcm9ja3Mtc2VydmljZWFjY291bnQ6YWI1NGQzMjktZTQzNS00MWFlLWE5MDAtZWM2YjNmZTE1YzU0Cg= # then you issue a POST command to authenticate and retrieve an access_token from Keycloak # the grant_type used is client_credentials $ curl -X POST https://keycloak.microcks.example.com/realms/microcks/protocol/openid-connect/token -H \u0026#39;Content-Type: application/x-www-form-urlencoded\u0026#39; -H \u0026#39;Accept: application/json\u0026#39; -H \u0026#39;Authorization: Basic bWljcm9ja3Mtc2VydmljZWFjY291bnQ6YWI1NGQzMjktZTQzNS00MWFlLWE5MDAtZWM2YjNmZTE1YzU0Cg=\u0026#39; -d \u0026#39;grant_type=client_credentials\u0026#39; -k -s | jq .
{ \u0026#34;access_token\u0026#34;: \u0026#34;eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnTVY5OUNfdHRCcDNnSy0tUklaYkY5TDJUWkdpTWZUSWQwaXNrXzh4TElZIn0.eyJleHAiOjE3MTcwNzA0MTQsImlhdCI6MTcxNzA3MDExNCwianRpIjoiM2YyYWZkMjgtMzQ3Ny00NjJiLWIzYmEtNDljZTE3NGQwMTViIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MTgwL3JlYWxtcy9taWNyb2NrcyIsImF1ZCI6WyJtaWNyb2Nrcy1hcHAiLCJhY2NvdW50Il0sInN1YiI6IjY5OGZhMzM5LTk5NjEtNDA0ZC1iMjUwLTRhMzQ5MzY2ZDQ2ZCIsInR5cCI6IkJlYXJlciIsImF6cCI6Im1pY3JvY2tzLXNlcnZpY2VhY2NvdW50IiwiYWNyIjoiMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiZGVmYXVsdC1yb2xlcy1taWNyb2NrcyJdfSwicmVzb3VyY2VfYWNjZXNzIjp7Im1pY3JvY2tzLWFwcCI6eyJyb2xlcyI6WyJ1c2VyIl19LCJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJjbGllbnRIb3N0IjoiMTcyLjE3LjAuMSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwicHJlZmVycmVkX3VzZXJuYW1lIjoic2VydmljZS1hY2NvdW50LW1pY3JvY2tzLXNlcnZpY2VhY2NvdW50IiwiY2xpZW50QWRkcmVzcyI6IjE3Mi4xNy4wLjEiLCJjbGllbnRfaWQiOiJtaWNyb2Nrcy1zZXJ2aWNlYWNjb3VudCJ9.FgWaKrZthEEK4pAyd9n8mMxCfErCzXN8l8QUaAI9-VYfwfy1qXAqpqtL8rTtOf4MiAV0P7ntz1firmv6GfaInHD9FMbysXOtp6RVB3Jj0ITNqsR-Guw6lYZIKg5ECtqLw3x5cISaq00VGTIOpZDGVn8GRM-a6XQHvfRJzPqgZXELWIhxCzmBor2Sv8m35E_jNQT-cMNrd7XPdRfFYcYqxQgOmez1N9uHg0kajWJEHKFu1TFaa1HT2vaFB6QgNnJusiEIVEltK7FG42SC1QXH9LmUJC9FK7jRTqJx43VMLOCT4xnwsimVq6vlYr_TCsrCB7HqSZUQqeer9cddRnsfag\u0026#34;, \u0026#34;expires_in\u0026#34;: 300, \u0026#34;refresh_expires_in\u0026#34;: 0, \u0026#34;token_type\u0026#34;: \u0026#34;Bearer\u0026#34;, \u0026#34;not-before-policy\u0026#34;: 0, \u0026#34;scope\u0026#34;: \u0026#34;email profile\u0026#34; } # finally, you can reuse this access_token as the bearer to call Microcks APIs $ curl https://microcks.example.com/api/services -H \u0026#39;Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnTVY5OUNfdHRCcDNnSy0tUklaYkY5TDJUWkdpTWZUSWQwaXNrXzh4TElZIn0.eyJleHAiOjE3MTcwNzA0MTQsImlhdCI6MTcxNzA3MDExNCwianRpIjoiM2YyYWZkMjgtMzQ3Ny00NjJiLWIzYmEtNDljZTE3NGQwMTViIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MTgwL3JlYWxtcy9taWNyb2NrcyIsImF1ZCI6WyJtaWNyb2Nrcy1hcHAiLCJhY2NvdW50Il0sInN1YiI6IjY5OGZhMzM5LTk5NjEtNDA0ZC1iMjUwLTRhMzQ5MzY2ZDQ2ZCIsInR5cCI6IkJlYXJlciIsImF6cCI6Im1pY3JvY2tzLXNlcnZpY2VhY2NvdW50IiwiYWNyIjoiMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiZGVmYXVsdC1yb2xlcy1taWNyb2NrcyJdfSwicmVzb3VyY2VfYWNjZXNzIjp7Im1pY3JvY2tzLWFwcCI6eyJyb2xlcyI6WyJ1c2VyIl19LCJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJjbGllbnRIb3N0IjoiMTcyLjE3LjAuMSIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwicHJlZmVycmVkX3VzZXJuYW1lIjoic2VydmljZS1hY2NvdW50LW1pY3JvY2tzLXNlcnZpY2VhY2NvdW50IiwiY2xpZW50QWRkcmVzcyI6IjE3Mi4xNy4wLjEiLCJjbGllbnRfaWQiOiJtaWNyb2Nrcy1zZXJ2aWNlYWNjb3VudCJ9.FgWaKrZthEEK4pAyd9n8mMxCfErCzXN8l8QUaAI9-VYfwfy1qXAqpqtL8rTtOf4MiAV0P7ntz1firmv6GfaInHD9FMbysXOtp6RVB3Jj0ITNqsR-Guw6lYZIKg5ECtqLw3x5cISaq00VGTIOpZDGVn8GRM-a6XQHvfRJzPqgZXELWIhxCzmBor2Sv8m35E_jNQT-cMNrd7XPdRfFYcYqxQgOmez1N9uHg0kajWJEHKFu1TFaa1HT2vaFB6QgNnJusiEIVEltK7FG42SC1QXH9LmUJC9FK7jRTqJx43VMLOCT4xnwsimVq6vlYr_TCsrCB7HqSZUQqeer9cddRnsfag\u0026#39; -k -s | jq . 
To finally get the result of an API call:\n[ { \u0026#34;id\u0026#34;: \u0026#34;65fc52b9512f6013cb7e9781\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;API Pastry - 2.0\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;2.0.0\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;REST\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;createdOn\u0026#34;: 1711035065536, \u0026#34;lastUpdate\u0026#34;: 1714377633653, \u0026#34;labels\u0026#34;: { \u0026#34;domain\u0026#34;: \u0026#34;pastry\u0026#34; } }, \u0026#34;sourceArtifact\u0026#34;: \u0026#34;https://raw.githubusercontent.com/microcks/microcks/master/samples/APIPastry-openapi.yaml\u0026#34;, \u0026#34;operations\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;GET /pastry\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;resourcePaths\u0026#34;: [ \u0026#34;/pastry\u0026#34; ] }, { \u0026#34;name\u0026#34;: \u0026#34;GET /pastry/{name}\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARTS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;defaultDelay\u0026#34;: 0, \u0026#34;resourcePaths\u0026#34;: [ \u0026#34;/pastry/Eclair%20Cafe\u0026#34;, \u0026#34;/pastry/Millefeuille\u0026#34; ], \u0026#34;parameterConstraints\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;TraceID\u0026#34;, \u0026#34;in\u0026#34;: \u0026#34;header\u0026#34;, \u0026#34;required\u0026#34;: false, \u0026#34;recopy\u0026#34;: true } ] }, { \u0026#34;name\u0026#34;: \u0026#34;PATCH /pastry/{name}\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;PATCH\u0026#34;, \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARTS\u0026#34;, \u0026#34;dispatcherRules\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;resourcePaths\u0026#34;: [ \u0026#34;/pastry/Eclair%20Cafe\u0026#34; ] } ] } ] "},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/dispatching/","title":"Dispatcher & dispatching rules","description":"","searchKeyword":"","content":"Introduction In order to provide smart mocks, Microcks uses Dispatchers and Dispatching Rules to find the most appropriate response to return when receiving a request.\nThe Dispatcher defines the routing logic for mocks, specifying which kinds of elements of an incoming request will be examined to find a match. The Dispatcher Rules refine those elements as well as the matching rule to find the correct response.\nBy default, Microcks looks at the variable parts between the different examples of the same operation when importing a new Service or API and infers those two elements. Then, based on those elements, it computes some fingerprint that allows unique identification for every request/response pair. That\u0026rsquo;s what we call the Dispatch Criteria.\nWhen using this default and receiving an incoming request on a mock endpoint, Microcks will re-apply the Service or API Dispatching Rules to compute the fingerprint again and find the appropriate response matching this criteria.\nHowever, you may need more than this inferred logic in some situations. Microcks has you covered! It allows you to configure and use advanced dispatchers and associated rules to implement your own business rules or constraints.\nInferred dispatchers As a reminder on default, inferred dispatchers: you may find URI_PARTS, URI_PARAMS, URI_ELEMENTS, QUERY_ARGS, QUERY_MATCH or SCRIPT.
The first three are usually found when using Postman or OpenAPI as a contract artifact; they are deduced from the paths and contract elements. The last two are usually found when using SoapUI as a contract artifact.\nHere are below some explanations on these dispatchers and their associated dispatching rules syntax:\nDispatcher Explanations Rules syntax URI_PARTS Inferred when a Service or API operation has only path parameters Path variable names separated by \u0026amp;\u0026amp;. Example: for a /blog/post/{year}/{month} path, the rule is year \u0026amp;\u0026amp; month URI_PARAMS Inferred when a Service or API operation has only query parameters Query variable names separated by \u0026amp;\u0026amp;. Example: for a /search?status={s}\u0026amp;query={q} operation, the rule is status \u0026amp;\u0026amp; query URI_ELEMENTS Inferred when a Service or API operation has both path and query parameters Path variable names separated by \u0026amp;\u0026amp;, then ?? followed by query variable names separated by \u0026amp;\u0026amp;. Example: for a /v2/pet/{petId}?user_key={k}, the rule is petId ?? user_key QUERY_ARGS Inferred when a GraphQL API or gRPC service operation has only primitive types arguments Variable names separated by \u0026amp;\u0026amp;. Example: for a GraphQL mutation mutation AddStars($filmId: String, $number: Int) {...}, the rule is filmId \u0026amp;\u0026amp; number QUERY_MATCH Extracted from a SoapUI project. Defines an XPath matching evaluation: the extracted result from the input query should match a response name. Example: for a Hello SOAP Service that extracts the sayHello element value to find a greeting, the rule is declare namespace ser='http://www.example.com/hello'; //ser:sayHelloResponse/sayHello. XPath functions can also be used here for evaluation - eg. something like: concat(//ser:sayHello/title/text(),' ',//ser:sayHello/name/text()) SCRIPT Extracted from a SoapUI project. Defines a Groovy script evaluation: the result of type String should match a response name. See the section below on the script dispatcher. Dispatching rules override Changing Dispatching Rules or even the Dispatcher can be done in different ways:\nVia the web UI, selecting Edit Properties of the operation from the 3-dots menu on the right of the operation name. You should be logged in as a repository manager to have this option (see the Managing Users how-to guide if needed), Via Microcks\u0026rsquo; own API after being connected to the Microcks API, Via an additional API Metadata artifact that allows this customization, Via Microcks OpenAPI extensions or AsyncAPI extensions that allow this customization as well. Advanced dispatchers and rules QUERY HEADER dispatcher Since Microcks 1.11, the QUERY_HEADER dispatching strategy is available on REST mocks and allows specifying one or many request headers as the criterion to find a matching response.\nIf you want to use it, you can just specify a dispatching rule where header names are separated by \u0026amp;\u0026amp;. An example rule can be x-api-key \u0026amp;\u0026amp; x-tenant - in this case, Microcks will use both request header values to find a matching response.
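Invoking the corresponding mock could then look like the sketch below (the API path and header values are purely hypothetical):
curl http://microcks.example.com/rest/My+API/1.0/resource \
    -H \u0026#39;x-api-key: key-123\u0026#39; \
    -H \u0026#39;x-tenant: acme\u0026#39;
Both header values are part of the computed fingerprint, so changing either one may select a different response.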
JSON BODY dispatcher The JSON_BODY dispatching strategy allows specifying a dispatching rule that will analyse the request payload in order to find a matching response. In order to specify such an expression, you can use the help section on the right of the page that provides examples and copy/paste shortcuts.\nThe dispatching rules of the JSON_BODY dispatcher are always expressed using a JSON payload with 3 properties:\nexp is the expression to evaluate against the request body. It is indeed a JSON Pointer expression. We already use this expression language in Templating features. From the evaluation of this expression, we\u0026rsquo;ll get a value. Here /country denotes the country field of the incoming request. op is the operator to apply. Different operators are available like equals, range, regexp, size and presence, cases are a number of cases where keys are values to compare to the value extracted from the incoming request. Depending on the operator applied, the cases may have different specification formats.\nOperator Cases syntax Comments equals \u0026quot;\u0026lt;value\u0026gt;\u0026quot;: \u0026quot;\u0026lt;response\u0026gt;\u0026quot; A case named default is used as the default option range [\u0026lt;min\u0026gt;;\u0026lt;max\u0026gt;]: \u0026quot;\u0026lt;response\u0026gt;\u0026quot; Bracket side matters: for a left bracket, [ means inclusive and ] means exclusive. A case named default is used as the default option size \u0026quot;[\u0026lt;min\u0026gt;;\u0026lt;max\u0026gt;]\u0026quot;: \u0026quot;\u0026lt;response\u0026gt;\u0026quot; Size of an array property. Brackets must be inclusive. A case named default is used as the default option regexp \u0026quot;\u0026lt;posix regexp\u0026gt;\u0026quot;: \u0026quot;\u0026lt;response\u0026gt;\u0026quot; Regular expression applied to the value. A case named default is used as the default option presence \u0026quot;found\u0026quot;: \u0026quot;\u0026lt;response\u0026gt;\u0026quot; Checks the presence/absence of a property. 2 mandatory cases: found and default Say we\u0026rsquo;ve got this Beer API allowing us to record a new beer in our own catalog. We have a POST method that allows creating new beer resources, and we want to make the difference between 2 cases: the Accepted and the Not accepted responses. So we have to start describing the 2 examples in our API contract. You\u0026rsquo;ll notice in the capture below that:\nDispatcher and Dispatching Rules are empty. That means that you\u0026rsquo;ll get the first found response when invoking the mock, no matter the request. We have used Templating features to make the response content more dynamic - hence the {{ }} notation within the response body. Our business constraint here is to only accept beers coming from Belgium 🇧🇪, otherwise we have to return the Not accepted response. We may edit our dispatching rule to use the equals operator and save, and we can check this rule is applied to our operation.
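For illustration, here\u0026rsquo;s a sketch of such a JSON_BODY rule for this Beer API scenario - assuming the two responses are named Accepted and Not accepted as above:
{
  \u0026#34;exp\u0026#34;: \u0026#34;/country\u0026#34;,
  \u0026#34;op\u0026#34;: \u0026#34;equals\u0026#34;,
  \u0026#34;cases\u0026#34;: {
    \u0026#34;Belgium\u0026#34;: \u0026#34;Accepted\u0026#34;,
    \u0026#34;default\u0026#34;: \u0026#34;Not accepted\u0026#34;
  }
}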
This rule override will be persisted into Microcks and will survive future discoveries and refreshes of this API version.\n💡 We recommend having an in-depth look at the example provided on the page to fully understand the power of the different options.\nIllustration Given the templated responses and the above dispatching rule evaluating the body of incoming requests, we can now test our mock.\nLet\u0026rsquo;s start by creating a new beer coming from Belgium:\n$ curl -X POST http://microcks.example.com/rest/Beer+Catalog+API/1.0/beer \\ -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;Abbey Brune\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;Belgium\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Brown ale\u0026#34;, \u0026#34;rating\u0026#34;: 4.2, \u0026#34;references\u0026#34;: [ { \u0026#34;referenceId\u0026#34;: 1234 }, { \u0026#34;referenceId\u0026#34;: 5678 } ]}\u0026#39; { \u0026#34;name\u0026#34;: \u0026#34;Abbey Brune\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;Belgium\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Brown ale\u0026#34;, \u0026#34;rating\u0026#34;: 4.2, \u0026#34;references\u0026#34;: [ { \u0026#34;referenceId\u0026#34;: 1234 }, { \u0026#34;referenceId\u0026#34;: 5678 } ] } It is a success as the country has the Belgium value and the Accepted response is returned. Templates in this response are evaluated against the request content.\nNow let\u0026rsquo;s try with a German beer\u0026hellip; You\u0026rsquo;ll see that the Not accepted response is matched (look also at the return code) and adapted to the incoming request:\n$ curl -X POST http://microcks.example.com/rest/Beer+Catalog+API/1.0/beer \\ -H \u0026#39;Content-type: application/json\u0026#39; \\ -d \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;Spaten Oktoberfiest\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;Germany\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Amber\u0026#34;, \u0026#34;rating\u0026#34;: 2.8, \u0026#34;references\u0026#34;: []}\u0026#39; \u0026lt; HTTP/1.1 406 { \u0026#34;error\u0026#34;: \u0026#34;Not accepted\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Germany origin country is forbiden\u0026#34; } FALLBACK dispatcher Another useful advanced dispatching strategy, introduced in the Advanced Dispatching and Constraints for mocks blog post, is the FALLBACK strategy. As you may have guessed by its name, it behaves like a try-catch wrapping block in programming: it will try applying a first dispatcher with its own rule and, if it finds nothing, it will default to a fallback response. This allows you to define a default response in case the incoming request does not match any dispatching criteria.\nThe dispatching rules of the FALLBACK dispatcher are expressed using a JSON payload with 3 properties:\ndispatcher is the original dispatching strategy you want to be applied at first. Valid values are all the other dispatching strategies, dispatcherRules are the rules you want the original dispatcher to apply when looking for a response, fallback is simply the name of the response to use as the fallback if nothing is found on first try. Here\u0026rsquo;s below the kind of rule that was introduced in the aforementioned blog post:
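A sketch of such a rule - assuming the Weather Forecast API described next dispatches on its region query parameter and defines a response named unknown:
{
  \u0026#34;dispatcher\u0026#34;: \u0026#34;URI_PARAMS\u0026#34;,
  \u0026#34;dispatcherRules\u0026#34;: \u0026#34;region\u0026#34;,
  \u0026#34;fallback\u0026#34;: \u0026#34;unknown\u0026#34;
}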
In case of an unknown region requested as a query parameter on the Weather Forecast API, we\u0026rsquo;ll fall back to the unknown response providing a meaningful error message.\nIllustration Just issue an HTTP request with an unmanaged region like below:\n$ curl \u0026#39;https://microcks.apps.example.com/rest/WeatherForecast+API/1.0.0/forecast?region=center\u0026amp;apiKey=qwertyuiop\u0026#39; -k Region is unknown. Choose in north, west, east or south.% PROXY dispatcher The PROXY dispatcher was released in Microcks 1.9.1 and introduced in this blog post. As you may have guessed, this dispatcher simply changes the base URL of the request and makes a call to a real backend service.\nWhen using PROXY as a dispatcher, the dispatcherRules should just be set to the base URL of the target backend service.\nPROXY FALLBACK dispatcher The advanced PROXY_FALLBACK dispatcher works similarly to the FALLBACK dispatcher, but with one key difference: when no matching response is found within Microcks\u0026rsquo; dataset, instead of returning a fallback response, it changes the base URL of the request and makes a call to the real service.\nThe dispatching rules of the PROXY_FALLBACK dispatcher are expressed using a JSON payload with 3 properties:\ndispatcher is the original dispatching strategy you want to be applied at first. Valid values are all the other dispatching strategies, dispatcherRules are the rules you want the original dispatcher to apply when looking for a response, proxyUrl must be set to the base URL of the target backend service. SCRIPT dispatcher SCRIPT dispatchers are the most versatile and powerful way to integrate custom dispatching logic in Microcks. When using such a Dispatcher, the Dispatching Rule is simply a Groovy script that is evaluated and has to return the name of a mock response.\nBefore actually evaluating the script, Microcks builds a runtime context where elements from incoming requests are made available. Therefore, you may have access to different objects from the script.\nObject Description mockRequest Wrapper around the incoming request that fulfills the contract of the SoapUI mockRequest. Allows you to access the body payload with requestContent, request headers with getRequestHeaders() or all other request elements with getRequest() that accesses the underlying Java HttpServletRequest object requestContext Allows you to access a request scoped context for storing any kind of objects. Such context elements can be later reused when producing response content from templates log Access to a logger with common methods like debug(), info(), warn() or error(). Useful for troubleshooting. store Allows you to access a service scoped persistent store for string values. Such store elements can be later reused in other operations\u0026rsquo; scripts to keep track of state or feed the requestContext. Store provides helpful methods like put(key, value), get(key) or delete(key). Store elements are subject to a Time-To-Live that is 10 seconds by default. This TTL can be overridden using the put(key, value, ttlInSeconds) method.
Common use-cases Dispatch according to a header value:\ndef headers = mockRequest.getRequestHeaders() log.info(\u0026#34;headers: \u0026#34; + headers) if (headers.hasValues(\u0026#34;testcase\u0026#34;)) { def testCase = headers.get(\u0026#34;testcase\u0026#34;, \u0026#34;null\u0026#34;) switch(testCase) { case \u0026#34;1\u0026#34;: return \u0026#34;amount negativo\u0026#34;; case \u0026#34;2\u0026#34;: return \u0026#34;amount nullo\u0026#34;; case \u0026#34;3\u0026#34;: return \u0026#34;amount positivo\u0026#34;; case \u0026#34;4\u0026#34;: return \u0026#34;amount standard\u0026#34;; } } return \u0026#34;amount standard\u0026#34; Analyse XML body payload content:\nimport com.eviware.soapui.support.XmlHolder def holder = new XmlHolder( mockRequest.requestContent ) def name = holder[\u0026#34;//name\u0026#34;] if (name == \u0026#34;Andrew\u0026#34;){ return \u0026#34;Andrew Response\u0026#34; } else if (name == \u0026#34;Karla\u0026#34;){ return \u0026#34;Karla Response\u0026#34; } else { return \u0026#34;World Response\u0026#34; } Analyse JSON body payload content and set context:\nlog.info(\u0026#34;request content: \u0026#34; + mockRequest.requestContent); def json = new groovy.json.JsonSlurper().parseText(mockRequest.requestContent); if (json.cars.Peugeot != null) { requestContext.brand = \u0026#34;Peugeot\u0026#34;; log.info(\u0026#34;Got Peugeot\u0026#34;); } if (json.cars.Volvo != null) { requestContext.brand = \u0026#34;Volvo\u0026#34;; log.info(\u0026#34;Got Volvo\u0026#34;); } return \u0026#34;Default\u0026#34; Calling an external API (here the invocations metrics from Microcks in fact 😉) to use external information in dispatching logic:\ndef invJson = new URL(\u0026#34;http://127.0.0.1:8080/api/metrics/invocations/OneApp%20Home/1.0.0\u0026#34;).getText(); def inv = new groovy.json.JsonSlurper().parseText(invJson).dailyCount log.info(\u0026#34;daily invocation: \u0026#34; + inv) [...] Persist, read and delete information from the service-scoped persistent store:\ndef foo = store.get(\u0026#34;foo\u0026#34;); def bar = store.put(\u0026#34;bar\u0026#34;, \u0026#34;barValue\u0026#34;); store.delete(\u0026#34;baz\u0026#34;); "},{"section":"Documentation","url":"https://microcks.io/documentation/guides/installation/externals/","title":"Adding external dependencies","description":"","searchKeyword":"","content":"Overview This guide is a walkthrough that exposes Microcks extension capabilities and explains how to leverage them. By the end of this tour, you should be able to apply your customizations and figure out the possibilities they offer.\n💡 This guide is actually an adaptation of CNAM\u0026rsquo;s excellent blog post, Extend Microcks with custom libs and code, which provides comprehensive samples of how to apply the principles below.\nThis guide is organized in 3 different steps you\u0026rsquo;ll have to follow to test and produce a robust extended version of Microcks:\nIdentify the extension use-case and the component you\u0026rsquo;ll need to extend, Locally extend and test your additions to the container image, Build a final custom image embedding your additions for easy distribution. Let\u0026rsquo;s jump in! 🪂\n1. Identify use-cases At the time of writing, there are 2 extension points that may be used to extend the built-in features of Microcks:\n1️⃣ The SCRIPT dispatcher that runs Groovy scripts may need additional dependencies, allowing you to easily reuse your own or third-party libraries across all your mocks.
Think about:\nParsing and analyzing some custom headers or message envelopes, Gathering external data to enrich your response with dynamic content, Reusing rich datasets or decision engines for smarter responses, Applying custom security validation. 2️⃣ The Async Minion component can require additional security mechanism customization when accessing external brokers like Kafka or supporting different JMS implementations.\nBased on your knowledge of Microcks Architecture and Deployment Options, you may have guessed that use-cases:\nfrom 1️⃣ will require extending the main WebApp component; whereas use-cases from 2️⃣ will require extending the Async Minion component. 2. Locally extend container images The first step is very convenient when you’re doing a local evaluation of Microcks using the Docker Compose installation. A local lib folder can simply be mounted within the image /deployments/lib directory, and additional JAVA_* environment variables are set to load all the JARs found at this location.\n🗒️ It\u0026rsquo;s worth noting that even if we mentioned Docker Compose above, the solution is similar for Podman Compose.\nFor Webapp component Put your Jar files into a dedicated folder (e.g. ./lib) Add the following lines into your compose file for the Microcks container: volumes: - ./lib:/deployments/lib environment: - JAVA_OPTIONS=-Dloader.path=/deployments/lib - JAVA_MAIN_CLASS=org.springframework.boot.loader.launch.PropertiesLauncher - JAVA_APP_JAR=app.jar Restart and see the Jar files appended to the application classpath. You can directly use the Java or Groovy classes from your Jar in a SCRIPT. For Async Minion component Things are very similar here, except that the mount point in the Async Minion container is /deployments/lib-ext (/deployments/lib is used for internal purposes).\nvolumes: - \u0026#34;./config:/deployments/config\u0026#34; - \u0026#34;./lib:/deployments/lib-ext\u0026#34; environment: - QUARKUS_PROFILE=docker-compose - JAVA_CLASSPATH=/deployments/*:/deployments/lib/*:/deployments/lib-ext/* 3. Build custom container images Once happy with your library integration test, the next natural step would be to package everything as a custom immutable container image. That way, you can safely deploy it to your Kubernetes environments or even provide it to your developers using Microcks via our Testcontainers module.\nFor Webapp component For this, start writing this simple Dockerfile, extending the Microcks official image:\nFROM quay.io/microcks/microcks:latest # Copy libraries jar files COPY lib /deployments/lib ENV JAVA_OPTIONS=-Dloader.path=/deployments/lib ENV JAVA_MAIN_CLASS=org.springframework.boot.loader.launch.PropertiesLauncher ENV JAVA_APP_JAR=app.jar 💡 In a real Enterprise environment, it would be better to directly fetch the versioned library from an Enterprise Artifact repository like a Maven-compatible one. This would allow you to have reproducible builds of your custom image. It’s usually just a matter of adding a curl command to your Dockerfile:\nRUN curl -f \u0026#34;${REPOSITORY_URL}\u0026#34;/${libname}/${version}/${libname}-${version}.jar -o ${LIBDIR}/${libname}-${version}.jar For Async Minion component For this, start writing this simple Dockerfile, extending the Microcks Async Minion official image.
Notice that here, we can reuse the /deployments/lib location as we’re not going to replace existing libs but augment them with our own ones:\nFROM quay.io/microcks/microcks-async-minion:latest # Copy libraries jar files COPY lib /deployments/lib ENV JAVA_CLASSPATH=/deployments/*:/deployments/lib/* We have set the JAVA_CLASSPATH to force the discovery of the new JAR files.\nWrap-up With this guide, you’ve learned how to integrate private or third-party Java libraries to customize the behavior of Microcks during mock invocation or when integrating with external brokers. 🎉\nThese capabilities pave the way for advanced use cases like the processing of common message structures or the dynamic enrichment of datasets to produce the smartest mocks.\n"},{"section":"Documentation","url":"https://microcks.io/documentation/explanations/monitoring/","title":"Monitoring & Observability","description":"","searchKeyword":"","content":"Introduction As a cloud-native application, we take great care to provide observability on what\u0026rsquo;s going on within a Microcks instance. We dissociate two kinds of metrics: the Functional metrics that are related to all the domain objects you may find in Microcks, and the Technical metrics that are related to resource consumption and performance.\nFunctional metrics Microcks provides functional metrics directly from within its own REST API. This API will give you visibility on how you use the platform to invoke mocks, execute tests and enhance or degrade quality metrics. The endpoints of the API return JSON data.\nThree categories of endpoints are available:\n/api/metrics/conformance/* for querying/collecting the metrics related to the Test Conformance of the API | Services of your repository - see Conformance metrics, /api/metrics/invocations/* for querying the metrics related to mock invocations (daily/hourly invocations, by API | Service or aggregated), /api/metrics/tests/* for aggregated metrics on tests executed on the platform Have a look at the Connecting Microcks API and REST API reference to get details on how to use those endpoints.\nTechnical metrics For Technical metrics, Microcks components expose Prometheus endpoints that can be scraped to collect technical metrics. That way you can easily integrate Microcks monitoring into any modern monitoring stack with Alert Manager or Grafana.\nTwo different endpoints are available:\n/actuator/prometheus path for the main webapp component, /q/metrics path for the async-minion component From those endpoints, you will be able to collect resource consumption or performance metrics such as: JVM memory used, JVM thread pools, HTTP endpoints performance, database queries performance and so on.\nOpenTelemetry support Starting with Microcks 1.9.0, the main webapp component now supports OpenTelemetry instrumentation for logs, distributed tracing and metrics.\nOpenTelemetry is disabled by default and must be enabled using two different environment variables:\nOTEL_JAVAAGENT_ENABLED is set to false by default, so you\u0026rsquo;ll have to explicitly set it to true, OTEL_EXPORTER_OTLP_ENDPOINT is set to a local dummy endpoint, so you\u0026rsquo;ll have to set it to an OpenTelemetry collector endpoint of your environment. Something like http://otel-collector.acme.com:4317 for example.
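As a sketch, enabling this in a Docker Compose file for the webapp container could look like the following - the collector endpoint value is just an example:
environment:
  - OTEL_JAVAAGENT_ENABLED=true
  - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.acme.com:4317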
Check the dedicated README on GitHub to get more details.\nGrafana dashboard Starting with Microcks 1.9.0, we also provide a Grafana dashboard that allows you to easily track the performance and health status of your Microcks instance.\nThis dashboard is using data coming from a Prometheus source, so you don\u0026rsquo;t have to enable the full OpenTelemetry support to use it. Standard Prometheus endpoints scraped by your Prometheus instance will do the job.\nCheck the dedicated /dashboards folder on GitHub to get more details.\nBenchmark suite Starting with Microcks 1.9.0, we also provide a benchmark suite as an easy way of validating/sizing/evaluating changes on your Microcks instance. It allows you to simulate Virtual Users on different usage scenarios and gather performance metrics of your instance.\nCheck the dedicated README on GitHub to get more details.\n"},{"section":"","url":"https://microcks.io/events/","title":"Events & Past Recordings","description":"","searchKeyword":"","content":"🗓️ Events We actively participate in events worldwide and announce our presence on social networks, especially LinkedIn, so follow us (https://www.linkedin.com/company/microcks) to stay connected with the Microcks community. If you\u0026rsquo;re an event organizer or CNCF ambassador, feel free to invite us for talks, demos, or workshops - we\u0026rsquo;re passionate about learning and sharing!\n⏯️ Past Recordings Explore Microcks\u0026rsquo; archive of past event recordings, podcasts, articles, and informative resources!\nRemember to follow and subscribe to the Microcks YouTube channel 👇\nhttps://www.youtube.com/@Microcks/videos\nMust-(re)watch for tech enthusiasts, developers, and anyone interested in streamlining API delivery processes.\nKubecon NA Salt Lake City 2024, Streamlining Cloud-Native Development: Simplifying Dependencies and Testing with Microcks. Live stream recording with Josh Long, Microcks Reloaded: Spring AOT and Testcontainers support. Live stream recording with Josh Long, Microcks: Open source Kubernetes Native tool for API Mocking and Testing 👀 GraphQL conf 2023, Increase Your Productivity With No Code GraphQL Mocking. Devoxx France 2023, Accélérer vos livraisons d\u0026rsquo;API avec Microcks. Devoxx Belgium 2023, Speed Up your API delivery with Microcks. Demo from Dale Lane (Chief Architect, IBM Event Automation), Using AsyncAPI to generate a mock stream of Kafka events with Microcks. Quarkus Insights #148, Microcks in Quarkus. (🇫🇷) Paris JUG, meetup Fevrier 2024. AsyncAPI Conf India 2023, Elevating Event-Driven Architecture: Boost your delivery with AsyncAPI and Microcks. AsyncAPI Conf 2022, AsyncAPI Recipes for EDA Gourmet. AsyncAPI Conf 2021, AsyncAPI or CloudEvents? Both my Captain! OpenShift Coffee Break, Microcks: API testing into a microservices world. (🇫🇷) CloudNord Octobre 2021, Accélérer votre adoption EDA avec AsyncAPI \u0026amp; Microcks. Barcelona JUG April 2021, Web API Contract First: design, mock and test. Apidays Paris 2020, Speed-Up Kafka delivery with AsyncAPI \u0026amp; Microcks. (🇫🇷) OpenShift Meetup Février 2020, Accélérer votre initiative OpenBanking APIs avec Microcks. (🇫🇷) Devoxx France 2019, Une API, de l\u0026rsquo;idée à la production, en mode agile avec Red Hat!
Podcasts 👂 (🇫🇷) Artisan Développeur, #7.x – Tester son API avec Microcks.\nhttps://podcastaddict.com/episode/135165602 🎤\n(🇫🇷) Electro Monkeys, #81 – Testez et mockez vos API grâce à Microcks.\nhttps://electro-monkeys.fr/81-testez-et-mockez-vos-api-grace-a-microcks-avec-laurent-broudoux/ 🎤\nArticles 📖 Moving to Microcks — integrating it into your development flow\nhttps://medium.com/@lbroudoux/moving-to-microcks-integrating-it-into-your-development-flow-53a856bf2e90\nBoost your API mocking workflow with Ollama and Microcks\nhttps://medium.com/itnext/boost-your-api-mocking-workflow-with-ollama-and-microcks-38e25fe78450\nMocking and contract-testing in your Inner Loop with Microcks - Part 1: Easy environment setup\nhttps://itnext.io/mocking-and-contract-testing-in-your-inner-loop-with-microcks-part-1-easy-environment-setup-dcd0f4355231\nMocking and contract-testing in your Inner Loop with Microcks - Part 2: Unit testing with Testcontainers\nhttps://itnext.io/mocking-and-contract-testing-in-your-inner-loop-with-microcks-part-2-unit-testing-with-860a86cb4b4c\nMocking and contract-testing in your Inner Loop with Microcks - Part 3: Quarkus Devservice FTW\nhttps://itnext.io/mocking-and-contract-testing-in-your-inner-loop-with-microcks-part-3-quarkus-devservice-ftw-a14b807737be\nHow Microcks fit and unify Inner and Outer Loops for cloud-native development\nhttps://www.linkedin.com/pulse/how-microcks-fit-unify-inner-outer-loops-cloud-native-kheddache/\nDifferent levels of API contract testing with Microcks\nhttps://medium.com/@lbroudoux/different-levels-of-api-contract-testing-with-microcks-ccc0847f8c97\nMicrocks and Pact for API contract testing\nhttps://medium.com/@lbroudoux/microcks-and-pact-for-api-contract-testing-3e0e7d4516ca\nMocking Microservices Made Easy with Microcks\nhttps://blog.openshift.com/mocking-microservices-made-easy-microcks/\nFull API lifecycle management: A primer\nhttps://developers.redhat.com/blog/2019/02/25/full-api-lifecycle-management-a-primer/\nAn API Journey: From Idea to Deployment the Agile Way, Part 1\nhttps://developers.redhat.com/blog/2018/04/11/api-journey-idea-deployment-agile-part1/\nAn API Journey: From Idea to Deployment the Agile Way, Part 2\nhttps://developers.redhat.com/blog/2018/04/19/api-journey-idea-deployment-agile-way-part2/\nAn API Journey: From Idea to Deployment the Agile Way, Part 3\nhttps://developers.redhat.com/blog/2018/04/26/api-journey-idea-deployment-agile-way-part3/\nWorkshops An API Journey, from Mock to Deployment!\n“Day in the Life” workshop\nhttps://github.com/RedHatWorkshops/dayinthelife-integration "},{"section":"","url":"https://microcks.io/go-client/","title":"Go client","description":"","searchKeyword":"","content":""},{"section":"","url":"https://microcks.io/resources/","title":"Media resources","description":"","searchKeyword":"","content":"\rPlease feel free to borrow these! Here are the official Microcks logos registered by the CNCF.\nThe Linux Foundation® (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see Trademark Usage. 
Microcks is a Cloud Native Computing Foundation (CNCF) Sandbox project 🚀\nSimple logo, blue and light variants with transparent background:\nSimple logo with name stacked, blue and light variants with transparent background:\nHorizontal logo with name, blue and light variants with transparent background:\nHorizontal logo with baseline, blue and light variants with transparent background:\nHorizontal logo with color baseline and Twitter handle, blue and light variants with transparent background:\nPlease be kind! Do\u0026rsquo;s ✅ Use the Microcks logo to link to microcks.io\n✅ Use the Microcks logo to advertise that your product has support for Microcks\n✅ Use the Microcks logo in a blog post or news article about Microcks\nDon\u0026rsquo;ts ❌ Use the Microcks logo for your application’s icon\n❌ Create a modified version of the Microcks logo\n❌ Integrate the Microcks logo into your logo\n❌ Change the Microcks logo\u0026rsquo;s colors or aspect ratio\n"},{"section":"","url":"https://microcks.io/testcontainers-go/","title":"Go Testcontainers module","description":"","searchKeyword":"","content":""}]