Commit 58c1779: fixing shortcode

IoTechCrafts committed Aug 23, 2024 (1 parent: c35d57c)

Showing 3 changed files with 15 additions and 12 deletions.
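For context on the delimiter change in the hunks below: in Hugo, `{{< … >}}` invokes a shortcode with its inner content passed through raw, while `{{% … %}}` tells Hugo to render the inner content as Markdown first. A `details` block whose body contains Markdown therefore needs the `%` form, roughly like this (the title here is a made-up example, not from the commit):

```markdown
{{% details title="Example" closed="true" %}}

This body **is rendered as Markdown** because the shortcode
uses the `%` delimiters.

{{% /details %}}
```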
content/about.md (3 additions, 2 deletions)

@@ -19,7 +19,8 @@ Hope that these guides will make the **Start of your Linux Journey** a better ex
 If you found any of the content useful, you are free to [support this project](https://ko-fi.com/jalcocertech).
 {{< /callout >}}
 
-{{< dropdown title="📜 License - MIT" closed="true" >}}
+
+{{% details title="📜 License - MIT" closed="true" %}}
 
 I've chosen to utilize the **MIT License** (also known as the Expat License) for the content of this repository.
 
@@ -29,4 +30,4 @@ The MIT License is a straightforward and permissive **open-source license** that
 
 As the author, I'm pleased to offer this content of *Linux Made Easy* and the related repository <https://github.com/JAlcocerT/Linux> under the MIT License, inviting you to engage with it, incorporate it into your projects, and explore the possibilities it presents.
 
-{{< /dropdown >}}
+{{< /details >}}
content/docs/Linux_&_Cloud/llms.md (10 additions, 8 deletions)

@@ -82,17 +82,18 @@ Yes, there are many ways to replace Github Copilot for Free:

 ### Choosing the Right Model
 
-{{< dropdown title="LLM Quantization" closed="true" >}}
+
+{{% details title="LLM Quantization" closed="true" %}}
 * GPTQ quantization, a state-of-the-art method featured in research papers, offers minimal performance loss compared to previous techniques. It's most efficient on NVIDIA GPUs when the model fits entirely in VRAM.
 * GGML, a machine learning library by Georgi Gerganov (who also developed llama.cpp for running local LLMs on Mac), performs best on Apple or Intel hardware.
 
 Thanks: https://aituts.com/local-llms/#Which_Quantization
 
-{{< /dropdown >}}
+{{< /details >}}
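The trade-off that quantization methods like GPTQ and GGML exploit can be sketched in plain Python: weights are stored as low-bit integers plus a scale factor, shrinking memory at the cost of a small rounding error. This is a toy illustration of the general idea only, not the actual GPTQ or GGML algorithms:

```python
# Toy weight quantization: store float weights as 4-bit integers plus one
# scale factor, then reconstruct them on the fly. Illustrates the
# memory/accuracy trade-off; real methods (GPTQ, GGML) are more elaborate.

def quantize(weights, bits=4):
    levels = 2 ** (bits - 1) - 1          # symmetric range, e.g. -7..7 for 4 bits
    scale = max(abs(w) for w in weights) / levels
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.07, -0.33]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each restored weight is close to, but not exactly, the original;
# the rounding error per weight is bounded by scale / 2.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # small integers in the range -7..7
print(max_err)
```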

 #### Which LLMs are Trending?
 
-{{< dropdown title="You can always check the LLM's Leaderboards" closed="true" >}}
+{{% details title="You can always check the LLM's Leaderboards" closed="true" %}}
 
 * <https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard>
 * With **ELO** Rating: <https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard>
@@ -104,7 +105,7 @@ Thanks: https://aituts.com/local-llms/#Which_Quantization
 * And [this one](https://www.mosaicml.com/mpt) you can train it and use commercially: https://www.mosaicml.com/training
 
 > You can also check this repository: https://github.com/sindresorhus/awesome-chatgpt and https://github.com/f/awesome-chatgpt-prompts
-{{< /dropdown >}}
+{{< /details >}}

 ### Where to host in the Cloud?

@@ -143,7 +144,8 @@ If you need big GPU power, you can always try https://www.runpod.io/gpu-instance

 **Mixture of Experts** (MoE) is an approach in machine learning where a model consists of numerous sub-models (referred to as "experts"). Each expert specializes in handling different types of data or tasks. The main idea is to route different inputs to the most relevant experts to handle **specific tasks more efficiently** and effectively.
 
-{{< dropdown title="More about MoE LLMs" closed="true" >}}
+{{% details title="More about MoE LLMs" closed="true" %}}
+
 
 For example, some experts might be better at understanding technical jargon, while others might excel at creative writing or conversational language.
 
@@ -160,12 +162,12 @@ ollama run solar:10.7b #https://ollama.ai/library/solar/tags
 ```
 
 You can also run it in Google Colab: https://www.youtube.com/watch?v=ZyFlySElG1U
-{{< /dropdown >}}
+{{< /details >}}
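The routing idea behind MoE can be sketched in a few lines of Python: a gate scores each expert for a given input and only the top-scoring expert runs. This is a deliberately simplified, hypothetical sketch of the routing concept (the keyword-based gate and both experts are invented for illustration), not how Mixtral or Solar are implemented:

```python
# Minimal sketch of Mixture-of-Experts routing: a gate scores each expert
# for an input, and only the top-scoring expert processes it (top-1 routing).

def gate(text):
    # Hypothetical scoring: route by keyword overlap with each expert's domain.
    domains = {
        "code": {"python", "function", "bug", "compile"},
        "chat": {"hello", "weather", "feel", "today"},
    }
    words = set(text.lower().split())
    return {name: len(words & kw) for name, kw in domains.items()}

experts = {
    "code": lambda t: "code expert handles: " + t,
    "chat": lambda t: "chat expert handles: " + t,
}

def moe_forward(text):
    scores = gate(text)
    best = max(scores, key=scores.get)   # pick the highest-scoring expert
    return experts[best](text)

print(moe_forward("why does my python function not compile"))
# routed to the "code" expert
```

In real MoE LLMs the gate is a learned network, routing happens per token, and the outputs of the top-k experts are combined by weight rather than taken from a single winner.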

-{{< dropdown title="What it is a RAG?" closed="true" >}}
+{{% details title="What it is a RAG" closed="true" %}}
 
 RAG, which stands for "Retrieval-Augmented Generation", is a methodology used in the development of advanced natural language processing (NLP) systems, particularly in the context of large language models (LLMs).
 
 RAG is particularly useful for tasks that require a blend of understanding context, generating coherent responses, and incorporating up-to-date or specific factual information, such as in question-answering systems or chatbots.
 
-{{< /dropdown >}}
+{{< /details >}}
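The retrieve-then-generate flow behind RAG can be sketched as: score a small document store against the question, then prepend the best matches to the prompt sent to the LLM. A toy sketch, where naive word overlap stands in for real embedding search and the document store is made up:

```python
# Toy Retrieval-Augmented Generation pipeline: retrieve the most relevant
# documents by naive word overlap, then build an augmented prompt.
# Real RAG systems use embeddings and a vector store for retrieval.

DOCS = [
    "Ollama runs large language models locally via a simple CLI.",
    "RAG prepends retrieved context to the prompt before generation.",
    "The MIT License is a permissive open-source license.",
]

def retrieve(question, docs=DOCS, k=2):
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question):
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("what is a permissive license"))
```

The augmented prompt is then passed to any LLM (a local Ollama model, for instance), which answers using the retrieved context instead of relying only on its training data.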
content/docs/Linux_&_Cloud/selfhosting.md (2 additions, 2 deletions)

@@ -189,7 +189,7 @@ Thanks to [Tech-Practice](https://www.youtube.com/watch?v=HPO7fu7Vyw4&t=445s)

 ## FAQ
 
-{{< dropdown title="Where to Learn More about SelfHosting?" closed="true" >}}
+{{% details title="Where to Learn More about SelfHosting" closed="true" %}}
 
 * <https://awweso.me/>
 * https://awsmfoss.com/
@@ -203,7 +203,7 @@ Thanks to [Tech-Practice](https://www.youtube.com/watch?v=HPO7fu7Vyw4&t=445s)

 {{% /details %}}
 
-{{< dropdown title="How to Secure my Services?" closed="true" >}}
+{{% details title="How to Secure my Services?" closed="true" %}}
 
 * [NGINX](https://fossengineer.com/selfhosting-nginx-proxy-manager-docker/)
 * [Cloudflare](https://fossengineer.com/selfhosting-cloudflared-tunnel-docker/)