Merge pull request #609 from NetAppDocs/image-tags
Use correct image tag macro
kevin-hoke authored Jul 18, 2024
2 parents 259a9e4 + 5a5ea39 commit 7a86994
Showing 266 changed files with 1,949 additions and 1,949 deletions.
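The change swaps AsciiDoc's inline image macro (`image:`) for the block image macro (`image::`) on standalone figures. A rough sketch of the difference the commit targets, using an illustrative filename:

    // Inline macro: single colon, intended to sit inside a sentence.
    The cluster layout is shown in image:example-topology.png[Topology diagram] below.

    // Block macro: double colon, renders the image as a standalone figure on its own line.
    image::example-topology.png[Topology diagram]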
16 changes: 8 additions & 8 deletions _include/a-w-n_overview_astra_cc_install_manual.adoc
@@ -140,27 +140,27 @@ secret/astra-registry-cred created
. Select Administrator from the Perspective drop down.
. Navigate to Operators > OperatorHub and search for Astra.
+
image:redhat_openshift_image45.JPG[OpenShift Operator Hub]
image::redhat_openshift_image45.JPG[OpenShift Operator Hub]
. Select the `netapp-acc-operator` tile and click Install.
+
image:redhat_openshift_image123.jpg[ACC operator tile]
image::redhat_openshift_image123.jpg[ACC operator tile]
. On the Install Operator screen, accept all default parameters and click Install.
+
image:redhat_openshift_image124.jpg[ACC operator details]
image::redhat_openshift_image124.jpg[ACC operator details]
. Wait for the operator installation to complete.
+
image:redhat_openshift_image125.jpg[ACC operator wait for install]
image::redhat_openshift_image125.jpg[ACC operator wait for install]
. Once the operator installation succeeds, navigate to click View Operator.
+
image:redhat_openshift_image126.jpg[ACC operator install complete]
image::redhat_openshift_image126.jpg[ACC operator install complete]
. Then click Create Instance in the Astra Control Center tile in the operator.
+
image:redhat_openshift_image127.jpg[Create ACC instance]
image::redhat_openshift_image127.jpg[Create ACC instance]
. Fill the `Create AstraControlCenter` form fields and click Create.
.. Optionally edit the Astra Control Center instance name.
@@ -175,6 +175,6 @@ image:redhat_openshift_image127.jpg[Create ACC instance]
.. Enter the storage class name if you want to place PVCs on a non-default storage class.
.. Define CRD handling preferences.
+
image:redhat_openshift_image128.jpg[Create ACC instance]
image::redhat_openshift_image128.jpg[Create ACC instance]
+
image:redhat_openshift_image129.jpg[Create ACC instance]
image::redhat_openshift_image129.jpg[Create ACC instance]
10 changes: 5 additions & 5 deletions _include/containers_common_intro_sections.adoc
@@ -42,7 +42,7 @@ For more information, visit the Astra Trident website https://docs.netapp.com/us
[.normal]
NetApp has several storage platforms that are qualified with Astra Trident and Astra Control to provision, protect, and manage data for containerized applications.

image:redhat_openshift_image43.png[]
image::redhat_openshift_image43.png[]

* AFF and FAS systems run NetApp ONTAP and provide storage for both file-based (NFS) and block-based (iSCSI) use cases.
@@ -92,7 +92,7 @@ For more information about ONTAP, see the https://docs.netapp.com/us-en/ontap/in

NOTE: NetApp ONTAP is available on-premises, virtualized, or in the cloud.

image:redhat_openshift_image35.png[]
image::redhat_openshift_image35.png[]

== NetApp platforms

@@ -121,7 +121,7 @@ For more information about Cloud Volumes ONTAP, click https://docs.netapp.com/us
[.normal]
NetApp provides a number of products to help you orchestrate, manage, protect, and migrate stateful containerized applications and their data.

image:devops_with_netapp_image1.jpg[]
image::devops_with_netapp_image1.jpg[]

NetApp Astra Control offers a rich set of storage and application-aware data management services for stateful Kubernetes workloads powered by NetApp data protection technology. The Astra Control Service is available to support stateful workloads in cloud-native Kubernetes deployments. The Astra Control Center is available to support stateful workloads in on-premises deployments of Enterprise Kubernetes platforms like {k8s_distribution_name}. For more information visit the NetApp Astra Control website https://cloud.netapp.com/astra[here].

@@ -138,7 +138,7 @@ The following pages have additional information about the NetApp products that h
[.normal]
NetApp Astra Control Center offers a rich set of storage and application-aware data management services for stateful Kubernetes workloads deployed in an on-premises environment and powered by NetApp data protection technology.

image:redhat_openshift_image44.png[]
image::redhat_openshift_image44.png[]

NetApp Astra Control Center can be installed on a {k8s_distribution_name} cluster that has the Astra Trident storage orchestrator deployed and configured with storage classes and storage backends to NetApp ONTAP storage systems.

@@ -159,7 +159,7 @@ Astra Trident is an open-source, fully supported storage orchestrator for contai

An administrator can configure a number of storage backends based on project needs and storage system models that enable advanced storage features, including compression, specific disk types, or QoS levels that guarantee a certain level of performance. After they are defined, these backends can be used by developers in their projects to create persistent volume claims (PVCs) and to attach persistent storage to their containers on demand.

image:redhat_openshift_image2.png[]
image::redhat_openshift_image2.png[]

Astra Trident has a rapid development cycle and, like Kubernetes, is released four times a year.

2 changes: 1 addition & 1 deletion _include/gcp-region-support.adoc
@@ -3,7 +3,7 @@ Supplemental NFS datastore for GCVE is supported with NetApp Cloud Volume Servic
Only CVS-Performance volumes can be used for GCVE NFS Datastore.
For the available location, refer link:https://bluexp.netapp.com/cloud-volumes-global-regions#cvsGc[Global Region Map]

Google Cloud VMware Engine is available at following locations image:gcve_regions_Mar2023.png[]
Google Cloud VMware Engine is available at following locations image::gcve_regions_Mar2023.png[]
To minimize latency, NetApp CVS Volume and GCVE where you intent to mount the volume should be in same availability zone.
Work with Google and NetApp Solution Architects for availability and TCO optimizations.

6 changes: 3 additions & 3 deletions _include/rh-os-n_overview_astra_manual_install_backup.adoc
@@ -85,7 +85,7 @@ NOTE: Alternatively, you can create a service account, assign registry-editor an
[netapp-user@rhel7 ~]$ vi push-images-to-registry.sh
for astraImageFile in $(ls images/*.tar); do
astraImage=$(docker load --input ${astraImageFile} | sed 's/Loaded image: //')
astraImage=$(docker load --input ${astraImageFile} | sed 's/Loaded image:: //')
docker tag $astraImage $registry/$(echo $astraImage | sed 's/^[^\/]\+\///')
docker push $registry/$(echo $astraImage | sed 's/^[^\/]\+\///')
done
@@ -158,7 +158,7 @@ spec:
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: ASTRA_IMAGE_REGISTRY/kube-rbac-proxy:v0.5.0
image:: ASTRA_IMAGE_REGISTRY/kube-rbac-proxy:v0.5.0
name: kube-rbac-proxy
ports:
- containerPort: 8443
@@ -172,7 +172,7 @@ spec:
env:
- name: ACCOP_LOG_LEVEL
value: "2"
image: astra-registry.apps.ocp-vmw.cie.netapp.com/netapp-astra/acc-operator:21.08.7
image:: astra-registry.apps.ocp-vmw.cie.netapp.com/netapp-astra/acc-operator:21.08.7
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
2 changes: 1 addition & 1 deletion ai/a400-thinksystem-introduction.adoc
@@ -39,7 +39,7 @@ This document is intended for the following audiences:

This solution with Lenovo ThinkSystem servers and NetApp ONTAP with AFF storage is designed to handle AI training on large datasets using the processing power of GPUs alongside traditional CPUs. This validation demonstrates high performance and optimal data management with a scale-out architecture that uses either one, two, or four Lenovo SR670 V2 servers alongside a single NetApp AFF A400 storage system. The following figure provides an architectural overview.

image:a400-thinksystem-image2.png[This image depicts an Ethernet switch surrounded by the management server, four SR670 V2s with eight GPUs each and a NetApp ONTAP storage system.]
image::a400-thinksystem-image2.png[This image depicts an Ethernet switch surrounded by the management server, four SR670 V2s with eight GPUs each and a NetApp ONTAP storage system.]

This NetApp and Lenovo solution offers the following key benefits:

8 changes: 4 additions & 4 deletions ai/a400-thinksystem-technology-overview.adoc
@@ -24,9 +24,9 @@ This section introduces the major components of this solution in greater detail.

NetApp AFF storage systems enable businesses to meet enterprise storage requirements with industry-leading performance, superior flexibility, cloud integration, and best-in-class data management. Designed specifically for flash, AFF systems help accelerate, manage, and protect business-critical data.

image:a400-thinksystem-image3.png["This graphic depicts the front of the NetApp AFF A400 storage controller."]
image::a400-thinksystem-image3.png["This graphic depicts the front of the NetApp AFF A400 storage controller."]

image:a400-thinksystem-image4.png["This graphic depicts the back of the NetApp AFF A400 storage controller."]
image::a400-thinksystem-image4.png["This graphic depicts the back of the NetApp AFF A400 storage controller."]

NetApp AFF A400 is a mid-range NVMe flash storage system that includes the following features:

@@ -89,7 +89,7 @@ A FlexGroup volume (the following figure) is a single namespace made up of multi
* Up to 400 billion files in the same namespace
* Parallelized operations in NAS workloads across CPUs, nodes, aggregates, and constituent FlexVol volumes

image:a400-thinksystem-image5.png["This image depicts an HA-pair of storage controllers containing many volumes with main files within a FlexGroup.]"
image::a400-thinksystem-image5.png["This image depicts an HA-pair of storage controllers containing many volumes with main files within a FlexGroup.]"

== Lenovo ThinkSystem portfolio

@@ -107,7 +107,7 @@ In the AI area, Lenovo is taking a practical approach to helping enterprises und

The Lenovo ThinkSystem SR670 V2 rack server delivers optimal performance for accelerated AI and high-performance computing (HPC). Supporting up to eight GPUs, the SR670 V2 is suited for the computationally intensive workload requirements of ML, DL, and inference.

image:a400-thinksystem-image6.png["This image depicts three SR670 configurations. The first shows four SXM GPUs with eight 2.5 inch HS drives and 2 PCIe I/O slots. The second shows four double-wide or eight single wide GPU slots and two PCIe I/O slots with eight 2.5-inch or four 3.5-inch HS drives. The third shows eight double-wide GPU slots with six EDSFF HS drives and two PCIe I/O slots."]
image::a400-thinksystem-image6.png["This image depicts three SR670 configurations. The first shows four SXM GPUs with eight 2.5 inch HS drives and 2 PCIe I/O slots. The second shows four double-wide or eight single wide GPU slots and two PCIe I/O slots with eight 2.5-inch or four 3.5-inch HS drives. The third shows eight double-wide GPU slots with six EDSFF HS drives and two PCIe I/O slots."]

With the latest scalable Intel Xeon CPUs that support high-end GPUs (including the NVIDIA A100 80GB PCIe 8x GPU), the ThinkSystem SR670 V2 delivers optimized, accelerated performance for AI and HPC workloads.

2 changes: 1 addition & 1 deletion ai/a400-thinksystem-test-configuration.adoc
@@ -45,7 +45,7 @@ ImageNet is a frequently used image dataset. It contains almost 1.3 million imag

The following figure depicts the network topology of the tested configuration.

image:a400-thinksystem-image7.png["This graphic depicts the compute layer, a Lenovo ThinkSystem SR670 V2, the network layer, a Lenovo Ethernet switch, and the storage layer, a NetApp AFF A400 storage controller. All network connections are included."]
image::a400-thinksystem-image7.png["This graphic depicts the compute layer, a Lenovo ThinkSystem SR670 V2, the network layer, a Lenovo Ethernet switch, and the storage layer, a NetApp AFF A400 storage controller. All network connections are included."]

== Storage controller

4 changes: 2 additions & 2 deletions ai/a400-thinksystem-test-procedure-and-detailed-results.adoc
@@ -57,9 +57,9 @@ Both values are similar, demonstrating that the network storage can deliver data

This test simulated the expected use case for this solution: multi-job, multi-user AI training. Each node ran its own training while using the shared network storage. The results are displayed in the following figure, which shows that the solution case provided excellent performance with all jobs running at essentially the same speed as individual jobs. The total throughput scaled linearly with the number of nodes.

image:a400-thinksystem-image8.png[This figure shows the Aggregate Images per second.]
image::a400-thinksystem-image8.png[This figure shows the Aggregate Images per second.]

image:a400-thinksystem-image9.png[This figurwe shows the Runtime in minutes.]
image::a400-thinksystem-image9.png[This figurwe shows the Runtime in minutes.]

These graphs present the runtime in minutes and the aggregate images per second for compute nodes that used eight GPUs from each server on 100 GbE client networking, combining both the concurrent training model and the single training model. The average runtime for the training model was 35 minutes and 9 seconds. The individual runtimes were 34 minutes and 32 seconds, 36 minutes and 21 seconds, 34 minutes and 37 seconds, 35 minutes and 25 seconds, and 34 minutes and 31 seconds. The average images per second for the training model were 22,573, and the individual images per second were 21,764; 23,438; 22,556; 22,564; and 22,547.

6 changes: 3 additions & 3 deletions ai/ai-dgx-superpod.adoc
@@ -13,7 +13,7 @@ summary: This NetApp Verified Architecture describes the design of the NVIDIA DG
:linkattrs:
:imagesdir: ./../media/

image:NVIDIAlogo.png[200,200]
image::NVIDIAlogo.png[200,200]

Amine Bennani, David Arnette and Sathish Thyagarajan, NetApp

@@ -39,7 +39,7 @@ With NetApp EF600 all-flash arrays at the foundation of an NVIDIA DGX SuperPOD,
** Each DGX SuperPOD scalable unit (SU) consists of 32 DGX H100 systems and is capable of 640 petaFLOPS of AI performance at FP8 precision. It usually contains at least two NetApp BeeGFS building blocks depending on the performance and capacity requirements for a particular installation.

_A high-level view of the solution_
image:EF_SuperPOD_HighLevel.png[]
image::EF_SuperPOD_HighLevel.png[]

* NetApp BeeGFS building blocks consists of two NetApp EF600 arrays and two x86 servers:
** With NetApp EF600 all-flash arrays at the foundation of NVIDIA DGX SuperPOD, customers get a reliable storage foundation backed by six 9s of uptime.
@@ -48,7 +48,7 @@ image:EF_SuperPOD_HighLevel.png[]
* The combination of NVIDIA SuperPOD SUs and NetApp BeeGFS building blocks provides an agile AI solution in which compute or storage scales easily and seamlessly.

_NetApp BeeGFS building block_
image:EF_SuperPOD_buildingblock.png[]
image::EF_SuperPOD_buildingblock.png[]

=== Use Case Summary

6 changes: 3 additions & 3 deletions ai/ai-edge-introduction.adoc
@@ -49,9 +49,9 @@ This document is intended for the following audiences:

This Lenovo ThinkSystem server and NetApp ONTAP or NetApp SANtricity storage solution is designed to handle AI inferencing on large datasets using the processing power of GPUs alongside traditional CPUs. This validation demonstrates high performance and optimal data management with an architecture that uses either single or multiple Lenovo SR350 edge servers interconnected with a single NetApp AFF storage system, as shown in the following two figures.

image:ai-edge-image2.jpg[]
image::ai-edge-image2.jpg[]

image:ai-edge-image17.png[]
image::ai-edge-image17.png[]

The logical architecture overview in the following figure shows the roles of the compute and storage elements in this architecture. Specifically, it shows the following:

@@ -61,7 +61,7 @@ The logical architecture overview in the following figure shows the roles of the
** Updated models are pushed here.
** Archives input data that edge servers receive for later analysis. For example, if the edge devices are connected to cameras, the storage element keeps the videos captured by the cameras.

image:ai-edge-image3.png[]
image::ai-edge-image3.png[]

|===
| red | blue
8 changes: 4 additions & 4 deletions ai/ai-edge-technology-overview.adoc
@@ -27,7 +27,7 @@ State-of-the-art NetApp AFF storage systems enable AI inference deployments at t
* Entry-level NetApp AFF storage systems are based on FAS2750 hardware and SSD flash media
* Two controllers in HA configuration

image:ai-edge-image5.png[]
image::ai-edge-image5.png[]

NetApp entry-level AFF C190 storage systems support the following features:

@@ -62,7 +62,7 @@ The EF-Series is a family of entry-level and mid-range all-flash SAN storage arr

The following figure shows the NetApp EF280 storage system.

image:ai-edge-image7.png[]
image::ai-edge-image7.png[]

== NetApp EF280

@@ -156,9 +156,9 @@ Edge computing allows data from IoT devices to be analyzed at the edge of the ne

Featuring the Intel Xeon D processor with the flexibility to support acceleration for edge AI workloads, the SE350 is purpose-built for addressing the challenge of server deployments in a variety of environments outside the data center.

image:ai-edge-image8.png[]
image::ai-edge-image8.png[]

image:ai-edge-image9.png[]
image::ai-edge-image9.png[]

==== MLPerf

4 changes: 2 additions & 2 deletions ai/ai-edge-test-configuration.adoc
@@ -36,7 +36,7 @@ a|* NetApp ONTAP 9 software
* One interface group per controller, with four logical IP addresses for mount points
|===

image:ai-edge-image10.png[]
image::ai-edge-image10.png[]

The following table lists the storage configuration: AFF C190 with 2RU, 24 drive slots.

@@ -80,4 +80,4 @@ The following table lists the storage configuration for EF280.
|SE350-2 to iSCSI LUN 1
|===

image:ai-edge-image11.png[]
image::ai-edge-image11.png[]
