diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 9ccf18c94..7bd9defe1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -37,8 +37,8 @@ Testing is a crucial aspect of software development, and adherence to these guid - Prefix unit test functions with `Test`. - Use clear and descriptive names. ```go -func TestFunctionName(t *testing.T) { -// Test logic +func TestFunctionName(t *testing.T) { + // Test logic } ``` diff --git a/README.md b/README.md index 31ca648f9..e55163b29 100644 --- a/README.md +++ b/README.md @@ -76,13 +76,13 @@ package main import "gofr.dev/pkg/gofr" func main() { - app := gofr.New() + app := gofr.New() - app.GET("/greet", func(ctx *gofr.Context) (interface{}, error) { - return "Hello World!", nil - }) + app.GET("/greet", func(ctx *gofr.Context) (interface{}, error) { + return "Hello World!", nil + }) - app.Run() // listens and serves on localhost:8000 + app.Run() // listens and serves on localhost:8000 } ``` diff --git a/docs/advanced-guide/circuit-breaker/page.md b/docs/advanced-guide/circuit-breaker/page.md index df766b680..0d06e4c9c 100644 --- a/docs/advanced-guide/circuit-breaker/page.md +++ b/docs/advanced-guide/circuit-breaker/page.md @@ -26,10 +26,10 @@ func main() { app.AddHTTPService("order", "https://order-func", &service.CircuitBreakerConfig{ - // Number of consecutive failed requests after which circuit breaker will be enabled + // Number of consecutive failed requests after which circuit breaker will be enabled Threshold: 4, // Time interval at which circuit breaker will hit the aliveness endpoint. - Interval: 1 * time.Second, + Interval: 1 * time.Second, }, ) diff --git a/docs/advanced-guide/custom-spans-in-tracing/page.md b/docs/advanced-guide/custom-spans-in-tracing/page.md index 45048a554..b6514e74d 100644 --- a/docs/advanced-guide/custom-spans-in-tracing/page.md +++ b/docs/advanced-guide/custom-spans-in-tracing/page.md @@ -20,11 +20,11 @@ and returns a trace.Span. ```go func MyHandler(c context.Context) error { - span := c.Trace("my-custom-span") - defer span.Close() - - // Do some work here - return nil + span := c.Trace("my-custom-span") + defer span.Close() + + // Do some work here + return nil } ``` diff --git a/docs/advanced-guide/gofr-errors/page.md b/docs/advanced-guide/gofr-errors/page.md index 6ecbe504e..76764afa1 100644 --- a/docs/advanced-guide/gofr-errors/page.md +++ b/docs/advanced-guide/gofr-errors/page.md @@ -20,7 +20,7 @@ automatically handle HTTP status code selection. These include: #### Usage: To use the predefined HTTP errors, users can simply call them using GoFr's http package: ```go - err := http.ErrorMissingParam{Param: []string{"id"}} +err := http.ErrorMissingParam{Param: []string{"id"}} ``` ## Database Errors diff --git a/docs/advanced-guide/grpc/page.md b/docs/advanced-guide/grpc/page.md index c813d9359..418b67ff2 100644 --- a/docs/advanced-guide/grpc/page.md +++ b/docs/advanced-guide/grpc/page.md @@ -9,32 +9,33 @@ GoFr enables you to create gRPC handlers efficiently while leveraging GoFr's con **1. Protocol Buffer Compiler (`protoc`) Installation:** - **Linux (using `apt` or `apt-get`):** + ```bash - sudo apt install -y protobuf-compiler - protoc --version # Ensure compiler version is 3+ - ``` +sudo apt install -y protobuf-compiler +protoc --version # Ensure compiler version is 3+ +``` - **macOS (using Homebrew):** ```bash - brew install protobuf - protoc --version # Ensure compiler version is 3+ +brew install protobuf +protoc --version # Ensure compiler version is 3+ ``` **2. Go Plugins for Protocol Compiler:** a. 
Install protocol compiler plugins for Go: - ```bash - go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28 - go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2 - ``` +```bash +go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28 +go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2 +``` b. Update `PATH` for `protoc` to locate the plugins: - ```bash - export PATH="$PATH:$(go env GOPATH)/bin" - ``` +```bash +export PATH="$PATH:$(go env GOPATH)/bin" +``` ## Creating Protocol Buffers @@ -44,16 +45,16 @@ For a detailed guide, refer to the official gRPC documentation's tutorial: {% ne Create a `.proto` file (e.g., `customer.proto`) to define your service and the RPC methods it provides: - ```protobuf - // Indicates the protocol buffer version that is being used - syntax = "proto3"; - // Indicates the go package where the generated file will be produced - option go_package = "path/to/your/proto/file"; +```protobuf +// Indicates the protocol buffer version that is being used +syntax = "proto3"; +// Indicates the go package where the generated file will be produced +option go_package = "path/to/your/proto/file"; - service {serviceName}Service { - rpc {serviceMethod} ({serviceRequest}) returns ({serviceResponse}) {} - } - ``` +service {serviceName}Service { + rpc {serviceMethod} ({serviceRequest}) returns ({serviceResponse}) {} +} +``` **2. Specify Request and Response Types:** @@ -79,30 +80,32 @@ string address = 3; Run the following command to generate Go code using the Go gRPC plugins: - ```bash - protoc \ - --go_out=. \ - --go_opt=paths=source_relative \ - --go-grpc_out=. \ - --go-grpc_opt=paths=source_relative \ - {serviceName}.proto - ``` +```bash +protoc \ + --go_out=. \ + --go_opt=paths=source_relative \ + --go-grpc_out=. \ + --go-grpc_opt=paths=source_relative \ + {serviceName}.proto +``` This command generates two files, `{serviceName}.pb.go` and `{serviceName}_grpc.pb.go`, containing the necessary code for performing RPC calls. -## Generating gRPC Handler Template using `gofr wrap grpc` +## Generating gRPC Handler Template using `gofr wrap grpc` #### Prerequisite: gofr-cli must be installed To install the CLI - + ```bash - go install gofr.dev/cli/gofr@latest +go install gofr.dev/cli/gofr@latest ``` **1. Use the `gofr wrap grpc` Command:** - ```bash - gofr wrap grpc -proto=./path/your/proto/file - ``` + +```bash +gofr wrap grpc -proto=./path/your/proto/file +``` This command leverages the `gofr-cli` to generate a `{serviceName}_server.go` file (e.g., `CustomerServer.go`) containing a template for your gRPC server implementation, including context support, in the same directory as @@ -119,24 +122,26 @@ that of the specified proto file. **1. Import Necessary Packages:** - ```go - import ( - "gofr.dev/pkg/gofr" - "path/to/your/generated-grpc-server/packageName" - ) - ``` +```go +import ( + "path/to/your/generated-grpc-server/packageName" + + "gofr.dev/pkg/gofr" +) +``` **2. 
Register the Service in your `main.go`:** - ```go - func main() { - app := gofr.New() +```go +func main() { + app := gofr.New() + + packageName.Register{serviceName}ServerWithGofr(app, &packageName.{serviceName}GoFrServer{}) - packageName.Register{serviceName}ServerWithGofr(app, &packageName.{serviceName}GoFrServer{}) + app.Run() +} +``` - app.Run() - } - ``` >Note: By default, gRPC server will run on port 9000, to customize the port users can set `GRPC_PORT` config in the .env > ##### Check out the example of setting up a gRPC server in GoFr: [Visit GitHub](https://github.com/gofr-dev/gofr/blob/main/examples/grpc-server/main.go) diff --git a/docs/advanced-guide/handling-data-migrations/page.md b/docs/advanced-guide/handling-data-migrations/page.md index 44a0bca88..919378c68 100644 --- a/docs/advanced-guide/handling-data-migrations/page.md +++ b/docs/advanced-guide/handling-data-migrations/page.md @@ -23,7 +23,6 @@ package migrations import "gofr.dev/pkg/gofr/migration" - const createTable = `CREATE TABLE IF NOT EXISTS employee ( id int not null @@ -89,7 +88,6 @@ func main() { // Run the application a.Run() } - ``` When we run the app we will see the following logs for migrations which ran successfully. @@ -171,7 +169,7 @@ When using batch operations, consider using a `LoggedBatch` for atomicity or an package migrations import ( - "gofr.dev/pkg/gofr/migration" + "gofr.dev/pkg/gofr/migration" ) const ( @@ -181,52 +179,52 @@ const ( gender text, number text );` - + addCassandraRecords = `BEGIN BATCH INSERT INTO employee (id, name, gender, number) VALUES (1, 'Alison', 'F', '1234567980'); INSERT INTO employee (id, name, gender, number) VALUES (2, 'Alice', 'F', '9876543210'); APPLY BATCH; ` - + employeeDataCassandra = `INSERT INTO employee (id, name, gender, number) VALUES (?, ?, ?, ?);` ) func createTableEmployeeCassandra() migration.Migrate { - return migration.Migrate{ - UP: func(d migration.Datasource) error { - // Execute the create table statement - if err := d.Cassandra.Exec(createTableCassandra); err != nil { - return err - } - - // Batch processes can also be executed in Exec as follows: + return migration.Migrate{ + UP: func(d migration.Datasource) error { + // Execute the create table statement + if err := d.Cassandra.Exec(createTableCassandra); err != nil { + return err + } + + // Batch processes can also be executed in Exec as follows: if err := d.Cassandra.Exec(addCassandraRecords); err != nil { return err - } - - // Create a new batch operation - batchName := "employeeBatch" - if err := d.Cassandra.NewBatch(batchName, 0); err != nil { // 0 for LoggedBatch - return err - } - - // Add multiple queries to the batch - if err := d.Cassandra.BatchQuery(batchName, employeeDataCassandra, 1, "Harry", "M", "1234567980"); err != nil { - return err - } - - if err := d.Cassandra.BatchQuery(batchName, employeeDataCassandra, 2, "John", "M", "9876543210"); err != nil { - return err - } - - // Execute the batch operation - if err := d.Cassandra.ExecuteBatch(batchName); err != nil { - return err - } - - return nil - }, - } + } + + // Create a new batch operation + batchName := "employeeBatch" + if err := d.Cassandra.NewBatch(batchName, 0); err != nil { // 0 for LoggedBatch + return err + } + + // Add multiple queries to the batch + if err := d.Cassandra.BatchQuery(batchName, employeeDataCassandra, 1, "Harry", "M", "1234567980"); err != nil { + return err + } + + if err := d.Cassandra.BatchQuery(batchName, employeeDataCassandra, 2, "John", "M", "9876543210"); err != nil { + return err + } + + // 
Execute the batch operation + if err := d.Cassandra.ExecuteBatch(batchName); err != nil { + return err + } + + return nil + }, + } } ``` diff --git a/docs/advanced-guide/handling-file/page.md b/docs/advanced-guide/handling-file/page.md index 49a7f1bf8..8103fa53f 100644 --- a/docs/advanced-guide/handling-file/page.md +++ b/docs/advanced-guide/handling-file/page.md @@ -13,13 +13,13 @@ GoFr also supports FTP/SFTP file-store. Developers can also connect and use thei package main import ( - "gofr.dev/pkg/gofr" + "gofr.dev/pkg/gofr" - "gofr.dev/pkg/gofr/datasource/file/ftp" + "gofr.dev/pkg/gofr/datasource/file/ftp" ) func main() { - app := gofr.New() + app := gofr.New() app.AddFileStore(ftp.New(&ftp.Config{ Host: "127.0.0.1", @@ -28,8 +28,8 @@ func main() { Port: 21, RemoteDir: "/ftp/user", })) - - app.Run() + + app.Run() } ``` @@ -38,22 +38,22 @@ func main() { package main import ( - "gofr.dev/pkg/gofr" + "gofr.dev/pkg/gofr" - "gofr.dev/pkg/gofr/datasource/file/sftp" + "gofr.dev/pkg/gofr/datasource/file/sftp" ) func main() { - app := gofr.New() + app := gofr.New() app.AddFileStore(sftp.New(&sftp.Config{ - Host: "127.0.0.1", - User: "user", - Password: "password", - Port: 22, + Host: "127.0.0.1", + User: "user", + Password: "password", + Port: 22, })) - - app.Run() + + app.Run() } ``` @@ -67,16 +67,14 @@ To run S3 File-Store locally we can use localstack, package main import ( - "gofr.dev/pkg/gofr" + "gofr.dev/pkg/gofr" - "gofr.dev/pkg/gofr/datasource/file/s3" + "gofr.dev/pkg/gofr/datasource/file/s3" ) func main() { - app := gofr.New() - - - + app := gofr.New() + // Note that currently we do not handle connections through session token. // BaseEndpoint is not necessary while connecting to AWS as it automatically resolves it on the basis of region. // However, in case we are using any other AWS compatible service, such like running or testing locally, then this needs to be set. @@ -88,8 +86,8 @@ func main() { AccessKeyID: app.Config.Get("AWS_ACCESS_KEY_ID"), SecretAccessKey: app.Config.Get("AWS_SECRET_ACCESS_KEY"), })) - - app.Run() + + app.Run() } ``` > Note: The current implementation supports handling only one bucket at a time, @@ -202,7 +200,6 @@ _, err = csvFile.WriteAt([]byte("test content"), 4) if err != nil { return nil, err } - ``` ### Getting Information of the file/directory @@ -217,7 +214,6 @@ if entry.IsDir() { } fmt.Printf("%v: %v Size: %v Last Modified Time : %v\n" entryType, entry.Name(), entry.Size(), entry.ModTime()) - ``` >Note: In S3: > - Names without a file extension are treated as directories by default. 
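
The directory-listing hunk above formats each entry's name, size, and modification time. As a point of reference only, here is a minimal, self-contained sketch of that formatting step; the `entryInfo` interface and `fakeEntry` stub below are assumptions made for illustration, not GoFr's actual directory-entry types.

```go
package main

import (
	"fmt"
	"time"
)

// entryInfo lists only the methods used in the docs snippet; it is an assumed
// stand-in for illustration, not GoFr's actual directory-entry type.
type entryInfo interface {
	IsDir() bool
	Name() string
	Size() int64
	ModTime() time.Time
}

// printEntry mirrors the Printf from the page, tagging each entry as a file or directory.
func printEntry(entry entryInfo) {
	entryType := "File"
	if entry.IsDir() {
		entryType = "Directory"
	}

	fmt.Printf("%v: %v Size: %v Last Modified Time : %v\n",
		entryType, entry.Name(), entry.Size(), entry.ModTime())
}

// fakeEntry is a stub used only so the sketch runs end to end.
type fakeEntry struct{}

func (fakeEntry) IsDir() bool        { return false }
func (fakeEntry) Name() string       { return "file.csv" }
func (fakeEntry) Size() int64        { return 42 }
func (fakeEntry) ModTime() time.Time { return time.Now() }

func main() {
	printEntry(fakeEntry{})
}
```
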
diff --git a/docs/advanced-guide/http-authentication/page.md b/docs/advanced-guide/http-authentication/page.md index 765971ac6..4bbc57f90 100644 --- a/docs/advanced-guide/http-authentication/page.md +++ b/docs/advanced-guide/http-authentication/page.md @@ -25,11 +25,11 @@ Use `EnableBasicAuth(username, password)` to configure GoFr with pre-defined cre ```go func main() { app := gofr.New() - + app.EnableBasicAuth("admin", "secret_password") // Replace with your credentials - + app.GET("/protected-resource", func(c *gofr.Context) (interface{}, error) { - // Handle protected resource access + // Handle protected resource access return nil, nil }) @@ -44,18 +44,18 @@ The `validationFunc` takes the username and password as arguments and returns tr ```go func validateUser(c *container.Container, username, password string) bool { - // Implement your credential validation logic here - // This example uses hardcoded credentials for illustration only - return username == "john" && password == "doe123" -} + // Implement your credential validation logic here + // This example uses hardcoded credentials for illustration only + return username == "john" && password == "doe123" +} -func main() { - app := gofr.New() +func main() { + app := gofr.New() - app.EnableBasicAuthWithValidator(validateUser) + app.EnableBasicAuthWithValidator(validateUser) - app.GET("/secure-data", func(c *gofr.Context) (interface{}, error) { - // Handle access to secure data + app.GET("/secure-data", func(c *gofr.Context) (interface{}, error) { + // Handle access to secure data return nil, nil }) @@ -146,10 +146,10 @@ Use `EnableOAuth(jwks-endpoint,refresh_interval)` to configure GoFr with pre-def func main() { app := gofr.New() - app.EnableOAuth("http://jwks-endpoint", 20) - + app.EnableOAuth("http://jwks-endpoint", 20) + app.GET("/protected-resource", func(c *gofr.Context) (interface{}, error) { - // Handle protected resource access + // Handle protected resource access return nil, nil }) diff --git a/docs/advanced-guide/http-communication/page.md b/docs/advanced-guide/http-communication/page.md index a5d324610..144bf6722 100644 --- a/docs/advanced-guide/http-communication/page.md +++ b/docs/advanced-guide/http-communication/page.md @@ -27,30 +27,30 @@ i.e. the order of the options is not important. > Service names are to be kept unique to one service. 
```go -app.AddHTTPService( , ) -``` +app.AddHTTPService( , ) +``` #### Example ```go -package main - -import ( - "gofr.dev/pkg/gofr" -) - -func main() { - // Create a new application - app := gofr.New() - - // register a payment service which is hosted at http://localhost:9000 - app.AddHTTPService("payment", "http://localhost:9000") - - app.GET("/customer", Customer) - - // Run the application - app.Run() -} -``` +package main + +import ( + "gofr.dev/pkg/gofr" +) + +func main() { + // Create a new application + app := gofr.New() + + // register a payment service which is hosted at http://localhost:9000 + app.AddHTTPService("payment", "http://localhost:9000") + + app.GET("/customer", Customer) + + // Run the application + app.Run() +} +``` ### Accessing HTTP Service in handler @@ -59,29 +59,29 @@ Using the `GetHTTPService` method with the service name that was given at the ti the client can be retrieved as shown below: ```go -svc := ctx.GetHTTPService() -``` +svc := ctx.GetHTTPService() +``` ```go -func Customer(ctx *gofr.Context) (interface{}, error) { - // Get the payment service client - paymentSvc := ctx.GetHTTPService("payment") - - // Use the Get method to call the GET /user endpoint of payments service - resp, err := paymentSvc.Get(ctx, "user", nil) - if err != nil { - return nil, err - } - - defer resp.Body.Close() - - body, err := io.ReadAll(resp.Body) - if err != nil { - return nil, err - } - - return string(body), nil -} +func Customer(ctx *gofr.Context) (interface{}, error) { + // Get the payment service client + paymentSvc := ctx.GetHTTPService("payment") + + // Use the Get method to call the GET /user endpoint of payments service + resp, err := paymentSvc.Get(ctx, "user", nil) + if err != nil { + return nil, err + } + + defer resp.Body.Close() + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, err + } + + return string(body), nil +} ``` ### Additional Configurational Options diff --git a/docs/advanced-guide/injecting-databases-drivers/page.md b/docs/advanced-guide/injecting-databases-drivers/page.md index 5fee77e44..ce8b2d9a3 100644 --- a/docs/advanced-guide/injecting-databases-drivers/page.md +++ b/docs/advanced-guide/injecting-databases-drivers/page.md @@ -12,9 +12,9 @@ GoFr supports injecting ClickHouse that supports the following interface. Any dr using `app.AddClickhouse()` method, and user's can use ClickHouse across application with `gofr.Context`. 
```go type Clickhouse interface { - Exec(ctx context.Context, query string, args ...any) error - Select(ctx context.Context, dest any, query string, args ...any) error - AsyncInsert(ctx context.Context, query string, wait bool, args ...any) error + Exec(ctx context.Context, query string, args ...any) error + Select(ctx context.Context, dest any, query string, args ...any) error + AsyncInsert(ctx context.Context, query string, wait bool, args ...any) error } ``` @@ -32,51 +32,51 @@ go get gofr.dev/pkg/gofr/datasource/clickhouse@latest package main import ( - "gofr.dev/pkg/gofr" + "gofr.dev/pkg/gofr" - "gofr.dev/pkg/gofr/datasource/clickhouse" + "gofr.dev/pkg/gofr/datasource/clickhouse" ) type User struct { - Id string `ch:"id"` - Name string `ch:"name"` - Age string `ch:"age"` + Id string `ch:"id"` + Name string `ch:"name"` + Age string `ch:"age"` } func main() { - app := gofr.New() + app := gofr.New() + + app.AddClickhouse(clickhouse.New(clickhouse.Config{ + Hosts: "localhost:9001", + Username: "root", + Password: "password", + Database: "users", + })) - app.AddClickhouse(clickhouse.New(clickhouse.Config{ - Hosts: "localhost:9001", - Username: "root", - Password: "password", - Database: "users", - })) - - app.POST("/user", Post) - app.GET("/user", Get) - - app.Run() + app.POST("/user", Post) + app.GET("/user", Get) + + app.Run() } func Post(ctx *gofr.Context) (interface{}, error) { - err := ctx.Clickhouse.Exec(ctx, "INSERT INTO users (id, name, age) VALUES (?, ?, ?)", "8f165e2d-feef-416c-95f6-913ce3172e15", "aryan", "10") - if err != nil { - return nil, err - } + err := ctx.Clickhouse.Exec(ctx, "INSERT INTO users (id, name, age) VALUES (?, ?, ?)", "8f165e2d-feef-416c-95f6-913ce3172e15", "aryan", "10") + if err != nil { + return nil, err + } - return "successful inserted", nil + return "successful inserted", nil } func Get(ctx *gofr.Context) (interface{}, error) { - var user []User + var user []User - err := ctx.Clickhouse.Select(ctx, &user, "SELECT * FROM users") - if err != nil { - return nil, err - } + err := ctx.Clickhouse.Select(ctx, &user, "SELECT * FROM users") + if err != nil { + return nil, err + } - return user, nil + return user, nil } ``` @@ -86,25 +86,25 @@ using `app.AddMongo()` method, and user's can use MongoDB across application wit ```go type Mongo interface { Find(ctx context.Context, collection string, filter interface{}, results interface{}) error - + FindOne(ctx context.Context, collection string, filter interface{}, result interface{}) error - + InsertOne(ctx context.Context, collection string, document interface{}) (interface{}, error) - + InsertMany(ctx context.Context, collection string, documents []interface{}) ([]interface{}, error) - + DeleteOne(ctx context.Context, collection string, filter interface{}) (int64, error) - + DeleteMany(ctx context.Context, collection string, filter interface{}) (int64, error) - + UpdateByID(ctx context.Context, collection string, id interface{}, update interface{}) (int64, error) - + UpdateOne(ctx context.Context, collection string, filter interface{}, update interface{}) error - + UpdateMany(ctx context.Context, collection string, filter interface{}, update interface{}) (int64, error) - + CountDocuments(ctx context.Context, collection string, filter interface{}) (int64, error) - + Drop(ctx context.Context, collection string) error } ``` @@ -123,10 +123,10 @@ go get gofr.dev/pkg/gofr/datasource/mongo@latest package main import ( - "gofr.dev/pkg/gofr/datasource/mongo" - "go.mongodb.org/mongo-driver/bson" - - "gofr.dev/pkg/gofr" + 
"go.mongodb.org/mongo-driver/bson" + "gofr.dev/pkg/gofr/datasource/mongo" + + "gofr.dev/pkg/gofr" ) type Person struct { @@ -137,9 +137,9 @@ type Person struct { func main() { app := gofr.New() - - db := mongo.New(mongo.Config{URI: "mongodb://localhost:27017", Database: "test",ConnectionTimeout: 4*time.Second}) - + + db := mongo.New(mongo.Config{URI: "mongodb://localhost:27017", Database: "test", ConnectionTimeout: 4 * time.Second}) + // inject the mongo into gofr to use mongoDB across the application // using gofr context app.AddMongo(db) @@ -228,11 +228,11 @@ import ( ) type Person struct { - ID int `json:"id,omitempty"` - Name string `json:"name"` - Age int `json:"age"` - // db tag specifies the actual column name in the database - State string `json:"state" db:"location"` + ID int `json:"id,omitempty"` + Name string `json:"name"` + Age int `json:"age"` + // db tag specifies the actual column name in the database + State string `json:"state" db:"location"` } func main() { @@ -258,7 +258,7 @@ func main() { return nil, err } - err = c.Cassandra.ExecWithCtx(c,`INSERT INTO persons(id, name, age, location) VALUES(?, ?, ?, ?)`, + err = c.Cassandra.ExecWithCtx(c, `INSERT INTO persons(id, name, age, location) VALUES(?, ?, ?, ?)`, person.ID, person.Name, person.Age, person.State) if err != nil { return nil, err @@ -406,7 +406,6 @@ func DGraphQueryHandler(c *gofr.Context) (interface{}, error) { return result, nil } - ``` @@ -418,16 +417,16 @@ using `app.AddSolr()` method, and user's can use Solr DB across application with ```go type Solr interface { - Search(ctx context.Context, collection string, params map[string]any) (any, error) - Create(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error) - Update(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error) - Delete(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error) - - Retrieve(ctx context.Context, collection string, params map[string]any) (any, error) - ListFields(ctx context.Context, collection string, params map[string]any) (any, error) - AddField(ctx context.Context, collection string, document *bytes.Buffer) (any, error) - UpdateField(ctx context.Context, collection string, document *bytes.Buffer) (any, error) - DeleteField(ctx context.Context, collection string, document *bytes.Buffer) (any, error) + Search(ctx context.Context, collection string, params map[string]any) (any, error) + Create(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error) + Update(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error) + Delete(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error) + + Retrieve(ctx context.Context, collection string, params map[string]any) (any, error) + ListFields(ctx context.Context, collection string, params map[string]any) (any, error) + AddField(ctx context.Context, collection string, document *bytes.Buffer) (any, error) + UpdateField(ctx context.Context, collection string, document *bytes.Buffer) (any, error) + DeleteField(ctx context.Context, collection string, document *bytes.Buffer) (any, error) } ``` @@ -514,83 +513,83 @@ enabling applications to leverage OpenTSDB for time-series data management throu ```go // OpenTSDB provides methods for GoFr applications to communicate with OpenTSDB -// through its REST APIs. +// through its REST APIs. 
type OpenTSDB interface { + // HealthChecker verifies if the OpenTSDB server is reachable. + // Returns an error if the server is unreachable, otherwise nil. + HealthChecker -// HealthChecker verifies if the OpenTSDB server is reachable. -// Returns an error if the server is unreachable, otherwise nil. -HealthChecker - -// PutDataPoints sends data to store metrics in OpenTSDB. -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - data: A slice of DataPoint objects; must contain at least one entry. -// - queryParam: Specifies the response format: -// - client.PutRespWithSummary: Requests a summary response. -// - client.PutRespWithDetails: Requests detailed response information. -// - Empty string (""): No additional response details. -// - res: A pointer to PutResponse, where the server's response will be stored. -// -// Returns: -// - Error if parameters are invalid, response parsing fails, or if connectivity issues occur. -PutDataPoints(ctx context.Context, data any, queryParam string, res any) error - -// QueryDataPoints retrieves data based on the specified parameters. -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - param: An instance of QueryParam with query parameters for filtering data. -// - res: A pointer to QueryResponse, where the server's response will be stored. -QueryDataPoints(ctx context.Context, param any, res any) error - -// QueryLatestDataPoints fetches the latest data point(s). -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - param: An instance of QueryLastParam with query parameters for the latest data point. -// - res: A pointer to QueryLastResponse, where the server's response will be stored. -QueryLatestDataPoints(ctx context.Context, param any, res any) error - -// GetAggregators retrieves available aggregation functions. -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - res: A pointer to AggregatorsResponse, where the server's response will be stored. -GetAggregators(ctx context.Context, res any) error - -// QueryAnnotation retrieves a single annotation. -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - queryAnnoParam: A map of parameters for the annotation query, such as client.AnQueryStartTime, client.AnQueryTSUid. -// - res: A pointer to AnnotationResponse, where the server's response will be stored. -QueryAnnotation(ctx context.Context, queryAnnoParam map[string]any, res any) error - -// PostAnnotation creates or updates an annotation. -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - annotation: The annotation to be created or updated. -// - res: A pointer to AnnotationResponse, where the server's response will be stored. -PostAnnotation(ctx context.Context, annotation any, res any) error - -// PutAnnotation creates or replaces an annotation. -// Fields not included in the request will be reset to default values. -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - annotation: The annotation to be created or replaced. -// - res: A pointer to AnnotationResponse, where the server's response will be stored. -PutAnnotation(ctx context.Context, annotation any, res any) error - -// DeleteAnnotation removes an annotation. -// -// Parameters: -// - ctx: Context for managing request lifetime. -// - annotation: The annotation to be deleted. -// - res: A pointer to AnnotationResponse, where the server's response will be stored. 
-DeleteAnnotation(ctx context.Context, annotation any, res any) error + // PutDataPoints sends data to store metrics in OpenTSDB. + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - data: A slice of DataPoint objects; must contain at least one entry. + // - queryParam: Specifies the response format: + // - client.PutRespWithSummary: Requests a summary response. + // - client.PutRespWithDetails: Requests detailed response information. + // - Empty string (""): No additional response details. + // + // - res: A pointer to PutResponse, where the server's response will be stored. + // + // Returns: + // - Error if parameters are invalid, response parsing fails, or if connectivity issues occur. + PutDataPoints(ctx context.Context, data any, queryParam string, res any) error + + // QueryDataPoints retrieves data based on the specified parameters. + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - param: An instance of QueryParam with query parameters for filtering data. + // - res: A pointer to QueryResponse, where the server's response will be stored. + QueryDataPoints(ctx context.Context, param any, res any) error + + // QueryLatestDataPoints fetches the latest data point(s). + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - param: An instance of QueryLastParam with query parameters for the latest data point. + // - res: A pointer to QueryLastResponse, where the server's response will be stored. + QueryLatestDataPoints(ctx context.Context, param any, res any) error + + // GetAggregators retrieves available aggregation functions. + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - res: A pointer to AggregatorsResponse, where the server's response will be stored. + GetAggregators(ctx context.Context, res any) error + + // QueryAnnotation retrieves a single annotation. + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - queryAnnoParam: A map of parameters for the annotation query, such as client.AnQueryStartTime, client.AnQueryTSUid. + // - res: A pointer to AnnotationResponse, where the server's response will be stored. + QueryAnnotation(ctx context.Context, queryAnnoParam map[string]any, res any) error + + // PostAnnotation creates or updates an annotation. + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - annotation: The annotation to be created or updated. + // - res: A pointer to AnnotationResponse, where the server's response will be stored. + PostAnnotation(ctx context.Context, annotation any, res any) error + + // PutAnnotation creates or replaces an annotation. + // Fields not included in the request will be reset to default values. + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - annotation: The annotation to be created or replaced. + // - res: A pointer to AnnotationResponse, where the server's response will be stored. + PutAnnotation(ctx context.Context, annotation any, res any) error + + // DeleteAnnotation removes an annotation. + // + // Parameters: + // - ctx: Context for managing request lifetime. + // - annotation: The annotation to be deleted. + // - res: A pointer to AnnotationResponse, where the server's response will be stored. + DeleteAnnotation(ctx context.Context, annotation any, res any) error } ``` @@ -724,37 +723,36 @@ with ScyllaDB. 
Any driver implementation that adheres to this interface can be i ```go type ScyllaDB interface { -// Query executes a CQL (Cassandra Query Language) query on the ScyllaDB cluster -// and stores the result in the provided destination variable `dest`. -// Accepts pointer to struct or slice as dest parameter for single and multiple -Query(dest any, stmt string, values ...any) error -// QueryWithCtx executes the query with a context and binds the result into dest parameter. -// Accepts pointer to struct or slice as dest parameter for single and multiple rows retrieval respectively. -QueryWithCtx(ctx context.Context, dest any, stmt string, values ...any) error -// Exec executes a CQL statement (e.g., INSERT, UPDATE, DELETE) on the ScyllaDB cluster without returning any result. -Exec(stmt string, values ...any) error -// ExecWithCtx executes a CQL statement with the provided context and without returning any result. -ExecWithCtx(ctx context.Context, stmt string, values ...any) error -// ExecCAS executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause). -// If the transaction fails because the existing values did not match, the previous values will be stored in dest. -// Returns true if the query is applied otherwise false. -// Returns false and error if any error occur while executing the query. -// Accepts only pointer to struct and built-in types as the dest parameter. -ExecCAS(dest any, stmt string, values ...any) (bool, error) -// NewBatch initializes a new batch operation with the specified name and batch type. -NewBatch(name string, batchType int) error -// NewBatchWithCtx takes context,name and batchtype and return error. -NewBatchWithCtx(_ context.Context, name string, batchType int) error -// BatchQuery executes a batch query in the ScyllaDB cluster with the specified name, statement, and values. -BatchQuery(name, stmt string, values ...any) error -// BatchQueryWithCtx executes a batch query with the provided context. -BatchQueryWithCtx(ctx context.Context, name, stmt string, values ...any) error -// ExecuteBatchWithCtx executes a batch with context and name returns error. -ExecuteBatchWithCtx(ctx context.Context, name string) error -// HealthChecker defines the HealthChecker interface. -HealthChecker + // Query executes a CQL (Cassandra Query Language) query on the ScyllaDB cluster + // and stores the result in the provided destination variable `dest`. + // Accepts pointer to struct or slice as dest parameter for single and multiple + Query(dest any, stmt string, values ...any) error + // QueryWithCtx executes the query with a context and binds the result into dest parameter. + // Accepts pointer to struct or slice as dest parameter for single and multiple rows retrieval respectively. + QueryWithCtx(ctx context.Context, dest any, stmt string, values ...any) error + // Exec executes a CQL statement (e.g., INSERT, UPDATE, DELETE) on the ScyllaDB cluster without returning any result. + Exec(stmt string, values ...any) error + // ExecWithCtx executes a CQL statement with the provided context and without returning any result. + ExecWithCtx(ctx context.Context, stmt string, values ...any) error + // ExecCAS executes a lightweight transaction (i.e. an UPDATE or INSERT statement containing an IF clause). + // If the transaction fails because the existing values did not match, the previous values will be stored in dest. + // Returns true if the query is applied otherwise false. + // Returns false and error if any error occur while executing the query. 
+ // Accepts only pointer to struct and built-in types as the dest parameter. + ExecCAS(dest any, stmt string, values ...any) (bool, error) + // NewBatch initializes a new batch operation with the specified name and batch type. + NewBatch(name string, batchType int) error + // NewBatchWithCtx takes context,name and batchtype and return error. + NewBatchWithCtx(_ context.Context, name string, batchType int) error + // BatchQuery executes a batch query in the ScyllaDB cluster with the specified name, statement, and values. + BatchQuery(name, stmt string, values ...any) error + // BatchQueryWithCtx executes a batch query with the provided context. + BatchQueryWithCtx(ctx context.Context, name, stmt string, values ...any) error + // ExecuteBatchWithCtx executes a batch with context and name returns error. + ExecuteBatchWithCtx(ctx context.Context, name string) error + // HealthChecker defines the HealthChecker interface. + HealthChecker } - ``` @@ -798,7 +796,6 @@ func main() { app.POST("/users", addUser) app.Run() - } func addUser(c *gofr.Context) (interface{}, error) { @@ -810,7 +807,6 @@ func addUser(c *gofr.Context) (interface{}, error) { _ = c.ScyllaDB.ExecWithCtx(c, `INSERT INTO users (user_id, username, email) VALUES (?, ?, ?)`, newUser.ID, newUser.Name, newUser.Email) return newUser, nil - } func getUser(c *gofr.Context) (interface{}, error) { @@ -831,6 +827,4 @@ func getUser(c *gofr.Context) (interface{}, error) { return user, nil } - - -``` \ No newline at end of file +``` diff --git a/docs/advanced-guide/key-value-store/page.md b/docs/advanced-guide/key-value-store/page.md index b9ff0355d..ac98e6fb8 100644 --- a/docs/advanced-guide/key-value-store/page.md +++ b/docs/advanced-guide/key-value-store/page.md @@ -10,9 +10,9 @@ the framework itself. GoFr provide the following functionalities for its key-val ```go type KVStore interface { - Get(ctx context.Context, key string) (string, error) - Set(ctx context.Context, key, value string) error - Delete(ctx context.Context, key string) error + Get(ctx context.Context, key string) (string, error) + Set(ctx context.Context, key, value string) error + Delete(ctx context.Context, key string) error } ``` diff --git a/docs/advanced-guide/middlewares/page.md b/docs/advanced-guide/middlewares/page.md index 4ddcf023f..7bce611ef 100644 --- a/docs/advanced-guide/middlewares/page.md +++ b/docs/advanced-guide/middlewares/page.md @@ -34,36 +34,36 @@ The UseMiddleware method is ideal for simple middleware that doesn't need direct ```go import ( - "net/http" + "net/http" - gofrHTTP "gofr.dev/pkg/gofr/http" + gofrHTTP "gofr.dev/pkg/gofr/http" ) // Define your custom middleware function func customMiddleware() gofrHTTP.Middleware { - return func(inner http.Handler) http.Handler { - return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - // Your custom logic here - // For example, logging, authentication, etc. - - // Call the next handler in the chain - inner.ServeHTTP(w, r) - }) - } + return func(inner http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + // Your custom logic here + // For example, logging, authentication, etc. 
+ + // Call the next handler in the chain + inner.ServeHTTP(w, r) + }) + } } func main() { - // Create a new instance of your GoFr application - app := gofr.New() + // Create a new instance of your GoFr application + app := gofr.New() - // Add your custom middleware to the application - app.UseMiddleware(customMiddleware()) + // Add your custom middleware to the application + app.UseMiddleware(customMiddleware()) - // Define your application routes and handlers - // ... + // Define your application routes and handlers + // ... - // Run your GoFr application - app.Run() + // Run your GoFr application + app.Run() } ``` diff --git a/docs/advanced-guide/overriding-default/page.md b/docs/advanced-guide/overriding-default/page.md index 3f5cc9f60..dd72cc765 100644 --- a/docs/advanced-guide/overriding-default/page.md +++ b/docs/advanced-guide/overriding-default/page.md @@ -14,21 +14,20 @@ package main import "gofr.dev/pkg/gofr" type user struct { - ID int `json:"id"` - Name string `json:"name"` + ID int `json:"id"` + Name string `json:"name"` } func main() { - app := gofr.New() + app := gofr.New() - app.GET("/users", func(ctx *gofr.Context) (interface{}, error) { + app.GET("/users", func(ctx *gofr.Context) (interface{}, error) { + users := []user{{ID: 1, Name: "Daria"}, {ID: 2, Name: "Ihor"}} - users := []user{{ID: 1, Name: "Daria"}, {ID: 2, Name: "Ihor"}} - - return users, nil - }) + return users, nil + }) - app.Run() + app.Run() } ``` diff --git a/docs/advanced-guide/serving-static-files/page.md b/docs/advanced-guide/serving-static-files/page.md index e1d856bc5..1c64e32d8 100644 --- a/docs/advanced-guide/serving-static-files/page.md +++ b/docs/advanced-guide/serving-static-files/page.md @@ -28,9 +28,9 @@ package main import "gofr.dev/pkg/gofr" -func main(){ - app := gofr.New() - app.Run() +func main() { + app := gofr.New() + app.Run() } ``` @@ -65,10 +65,10 @@ package main import "gofr.dev/pkg/gofr" -func main(){ - app := gofr.New() - app.AddStaticFiles("public", "./public") - app.Run() +func main() { + app := gofr.New() + app.AddStaticFiles("public", "./public") + app.Run() } ``` diff --git a/docs/advanced-guide/setting-custom-response-headers/page.md b/docs/advanced-guide/setting-custom-response-headers/page.md index e32b3bbcb..c058b98e3 100644 --- a/docs/advanced-guide/setting-custom-response-headers/page.md +++ b/docs/advanced-guide/setting-custom-response-headers/page.md @@ -20,6 +20,7 @@ GoFr simplifies the process of adding custom HTTP response headers and metadata - Keys must be strings, and values can be of any type. 
When metadata is included, the response structure is: + ```json { "data": {}, @@ -44,45 +45,45 @@ To include custom headers and metadata in your response, populate the Headers an package main import ( - "time" + "time" - "gofr.dev/pkg/gofr" - "gofr.dev/pkg/gofr/http/response" + "gofr.dev/pkg/gofr" + "gofr.dev/pkg/gofr/http/response" ) func main() { - app := gofr.New() + app := gofr.New() - app.GET("/hello", HelloHandler) + app.GET("/hello", HelloHandler) - app.Run() + app.Run() } func HelloHandler(c *gofr.Context) (interface{}, error) { - name := c.Param("name") - if name == "" { - c.Log("Name parameter is empty, defaulting to 'World'") - name = "World" - } - - // Define custom headers (map[string]string) - headers := map[string]string{ - "X-Custom-Header": "CustomValue", - "X-Another-Header": "AnotherValue", - } - - // Define metadata (map[string]any) - metaData := map[string]any{ - "environment": "staging", - "timestamp": time.Now(), - } - - // Return response with custom headers and metadata - return response.Response{ - Data: map[string]string{"message": "Hello, " + name + "!"}, - Metadata: metaData, - Headers: headers, - }, nil + name := c.Param("name") + if name == "" { + c.Log("Name parameter is empty, defaulting to 'World'") + name = "World" + } + + // Define custom headers (map[string]string) + headers := map[string]string{ + "X-Custom-Header": "CustomValue", + "X-Another-Header": "AnotherValue", + } + + // Define metadata (map[string]any) + metaData := map[string]any{ + "environment": "staging", + "timestamp": time.Now(), + } + + // Return response with custom headers and metadata + return response.Response{ + Data: map[string]string{"message": "Hello, " + name + "!"}, + Metadata: metaData, + Headers: headers, + }, nil } ``` @@ -90,28 +91,28 @@ func HelloHandler(c *gofr.Context) (interface{}, error) { #### Response with Metadata: When metadata is included, the response contains the metadata field: -```json - { - "data": { - "message": "Hello, World!" - }, - "metadata": { - "environment": "staging", - "timestamp": "2024-12-23T12:34:56Z" - } - } - ``` +```json +{ + "data": { + "message": "Hello, World!" + }, + "metadata": { + "environment": "staging", + "timestamp": "2024-12-23T12:34:56Z" + } +} +``` #### Response without Metadata: If no metadata is provided, the response only includes the data field: ```json - { - "data": { - "message": "Hello, World!" - } - } - ``` +{ + "data": { + "message": "Hello, World!" 
+ } +} +``` This functionality offers a convenient, structured way to include additional response information without altering the diff --git a/docs/advanced-guide/using-cron/page.md b/docs/advanced-guide/using-cron/page.md index 8eaec44c3..2502cfed1 100644 --- a/docs/advanced-guide/using-cron/page.md +++ b/docs/advanced-guide/using-cron/page.md @@ -53,18 +53,18 @@ package main import ( "time" - + "gofr.dev/pkg/gofr" ) func main() { - app := gofr.New() + app := gofr.New() // Run the cron job every 5 hours(*/5) app.AddCronJob("* */5 * * *", "", func(ctx *gofr.Context) { ctx.Logger.Infof("current time is %v", time.Now()) }) - + // Run the cron job every 10 seconds(*/10) app.AddCronJob("*/10 * * * * *", "", func(ctx *gofr.Context) { ctx.Logger.Infof("current time is %v", time.Now()) diff --git a/docs/advanced-guide/using-publisher-subscriber/page.md b/docs/advanced-guide/using-publisher-subscriber/page.md index 29b800a1f..19c59c9a5 100644 --- a/docs/advanced-guide/using-publisher-subscriber/page.md +++ b/docs/advanced-guide/using-publisher-subscriber/page.md @@ -112,19 +112,19 @@ KAFKA_BATCH_TIMEOUT=300 #### Docker setup ```shell docker run --name kafka-1 -p 9092:9092 \ - -e KAFKA_ENABLE_KRAFT=yes \ --e KAFKA_CFG_PROCESS_ROLES=broker,controller \ --e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ --e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ --e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ --e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ --e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ --e KAFKA_BROKER_ID=1 \ --e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ --e ALLOW_PLAINTEXT_LISTENER=yes \ --e KAFKA_CFG_NODE_ID=1 \ --v kafka_data:/bitnami \ -bitnami/kafka:3.4 + -e KAFKA_ENABLE_KRAFT=yes \ + -e KAFKA_CFG_PROCESS_ROLES=broker,controller \ + -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ + -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ + -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ + -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ + -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ + -e KAFKA_BROKER_ID=1 \ + -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ + -e ALLOW_PLAINTEXT_LISTENER=yes \ + -e KAFKA_CFG_NODE_ID=1 \ + -v kafka_data:/bitnami \ + bitnami/kafka:3.4 ``` ### GOOGLE @@ -140,8 +140,8 @@ GOOGLE_SUBSCRIPTION_NAME=order-consumer // unique subscription name to identify ```shell docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators docker run --name=gcloud-emulator -d -p 8086:8086 \ - gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators gcloud beta emulators pubsub start --project=test123 \ - --host-port=0.0.0.0:8086 + gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators gcloud beta emulators pubsub start --project=test123 \ + --host-port=0.0.0.0:8086 ``` > **Note**: To set GOOGLE_APPLICATION_CREDENTIAL - refer {% new-tab-link title="here" href="https://cloud.google.com/docs/authentication/application-default-credentials" /%} @@ -169,10 +169,10 @@ MQTT_PASSWORD=password // authentication password #### Docker setup ```shell docker run -d \ - --name mqtt \ - -p 8883:8883 \ - -v /mosquitto.conf:/mosquitto/config/mosquitto.conf \ - eclipse-mosquitto:latest + --name mqtt \ + -p 8883:8883 \ + -v \ + eclipse-mosquitto:latest /mosquitto.conf:/mosquitto/config/mosquitto.conf ``` > **Note**: find the default mosquitto config file {% new-tab-link title="here" 
href="https://github.com/eclipse/mosquitto/blob/master/mosquitto.conf" /%} @@ -229,12 +229,12 @@ app.AddPubSub(nats.New(nats.Config{ #### Docker setup ```shell docker run -d \ - --name nats \ - -p 4222:4222 \ - -p 8222:8222 \ - -v /nats.conf:/nats/config/nats.conf \ - nats:2.9.16 -``` + --name nats \ + -p 4222:4222 \ + -p 8222:8222 \ + -v \ + nats:2.9.16 /nats.conf:/nats/config/nats.conf +``` #### Configuration Options @@ -273,7 +273,7 @@ Use the `AddPubSub` method of GoFr's app to connect **Example** ```go - app := gofr.New() +app := gofr.New() app.AddPubSub(eventhub.New(eventhub.Config{ ConnectionString: "Endpoint=sb://gofr-dev.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=", @@ -375,7 +375,7 @@ func main() { err := c.Bind(&orderStatus) if err != nil { c.Logger.Error(err) - + // returning nil here as we would like to ignore the // incompatible message and continue reading forward return nil diff --git a/docs/advanced-guide/websocket/page.md b/docs/advanced-guide/websocket/page.md index 7a7a935b0..4dbade1a1 100644 --- a/docs/advanced-guide/websocket/page.md +++ b/docs/advanced-guide/websocket/page.md @@ -59,6 +59,7 @@ package main import ( "time" + "gofr.dev/pkg/gofr" "gofr.dev/pkg/gofr/websocket" ) @@ -67,11 +68,11 @@ func main() { app := gofr.New() wsUpgrader := websocket.NewWSUpgrader( - websocket.WithHandshakeTimeout(5 * time.Second), // Set handshake timeout - websocket.WithReadBufferSize(2048), // Set read buffer size - websocket.WithWriteBufferSize(2048), // Set write buffer size - websocket.WithSubprotocols("chat", "binary"), // Specify subprotocols - websocket.WithCompression(), // Enable compression + websocket.WithHandshakeTimeout(5*time.Second), // Set handshake timeout + websocket.WithReadBufferSize(2048), // Set read buffer size + websocket.WithWriteBufferSize(2048), // Set write buffer size + websocket.WithSubprotocols("chat", "binary"), // Specify subprotocols + websocket.WithCompression(), // Enable compression ) app.OverrideWebSocketUpgrader(wsUpgrader) diff --git a/docs/quick-start/connecting-redis/page.md b/docs/quick-start/connecting-redis/page.md index 7bae564e4..86ea17a12 100644 --- a/docs/quick-start/connecting-redis/page.md +++ b/docs/quick-start/connecting-redis/page.md @@ -16,8 +16,8 @@ You can also set up a development environment with password authentication as de ```bash docker run --name gofr-redis -p 2002:6379 -d \ - -e REDIS_PASSWORD=password \ - redis:7.0.5 --requirepass password + -e REDIS_PASSWORD=password \ + redis:7.0.5 --requirepass password ``` You can set a sample key `greeting` using the following command: diff --git a/docs/quick-start/introduction/page.md b/docs/quick-start/introduction/page.md index f3893b7f5..5451d4500 100644 --- a/docs/quick-start/introduction/page.md +++ b/docs/quick-start/introduction/page.md @@ -28,18 +28,17 @@ package main import "gofr.dev/pkg/gofr" func main() { - // initialise gofr object - app := gofr.New() + // initialise gofr object + app := gofr.New() - // register route greet - app.GET("/greet", func(ctx *gofr.Context) (interface{}, error) { + // register route greet + app.GET("/greet", func(ctx *gofr.Context) (interface{}, error) { + return "Hello World!", nil + }) - return "Hello World!", nil - }) - - // Runs the server, it will listen on the default port 8000. - // it can be over-ridden through configs - app.Run() + // Runs the server, it will listen on the default port 8000. 
+ // it can be over-ridden through configs + app.Run() } ``` diff --git a/docs/quick-start/observability/page.md b/docs/quick-start/observability/page.md index 20783c0f7..a8ba8096f 100644 --- a/docs/quick-start/observability/page.md +++ b/docs/quick-start/observability/page.md @@ -169,7 +169,7 @@ GoFr has support for following trace-exporters: To see the traces install zipkin image using the following Docker command: ```bash - docker run --name gofr-zipkin -p 2005:9411 -d openzipkin/zipkin:latest +docker run --name gofr-zipkin -p 2005:9411 -d openzipkin/zipkin:latest ``` Add Tracer configs in `.env` file, your .env will be updated to @@ -207,11 +207,11 @@ To see the traces install jaeger image using the following Docker command: ```bash docker run -d --name jaeger \ - -e COLLECTOR_OTLP_ENABLED=true \ - -p 16686:16686 \ - -p 14317:4317 \ - -p 14318:4318 \ - jaegertracing/all-in-one:1.41 + -e COLLECTOR_OTLP_ENABLED=true \ + -p 16686:16686 \ + -p 14317:4317 \ + -p 14318:4318 \ + jaegertracing/all-in-one:1.41 ``` Add Jaeger Tracer configs in `.env` file, your .env will be updated to diff --git a/docs/references/context/page.md b/docs/references/context/page.md index 9b46f2abe..41ce4532d 100644 --- a/docs/references/context/page.md +++ b/docs/references/context/page.md @@ -18,70 +18,78 @@ user access to dependencies. parts of the request. - `Context()` - to access the context associated with the incoming request - ```go - ctx.Request.Context() - ``` + +```go +ctx.Request.Context() +``` + - `Param(string)` - to access the query parameters present in the request, it returns the value of the key provided - ```go - // Example: Request is /configs?key1=value1&key2=value2 - value := ctx.Request.Param("key1") - // value = "value1" - ``` + +```go +// Example: Request is /configs?key1=value1&key2=value2 +value := ctx.Request.Param("key1") +// value = "value1" +``` + - `PathParam(string)` - to retrieve the path parameters - ```go - // Consider the path to be /employee/{id} - id := ctx.Request.PathParam("id") - ``` + +```go +// Consider the path to be /employee/{id} +id := ctx.Request.PathParam("id") +``` - `Bind(interface{})` - to access a decoded format of the request body, the body is mapped to the interface provided - ```go - // incoming request body is - // { - // "name" : "trident", - // "category" : "snacks" - // } - - type product struct{ - Name string `json:"name"` - Category string `json:"category"` - } - - var p product - ctx.Bind(&p) - // the Bind() method will map the incoming request to variable p - ``` - -- `Binding multipart-form data / urlencoded form data ` - - To bind multipart-form data or url-encoded form, you can use the Bind method similarly. The struct fields should be tagged appropriately +```go +// incoming request body is +// { +// "name" : "trident", +// "category" : "snacks" +// } + +type product struct{ + Name string `json:"name"` + Category string `json:"category"` +} + +var p product +ctx.Bind(&p) +// the Bind() method will map the incoming request to variable p +``` + +- `Binding multipart-form data / urlencoded form data ` + - To bind multipart-form data or url-encoded form, you can use the Bind method similarly. The struct fields should be tagged appropriately to map the form fields to the struct fields. 
The supported content types are `multipart/form-data` and `application/x-www-form-urlencoded` - - ```go - type Data struct { - Name string `form:"name"` - Compressed file.Zip `file:"upload"` +```go +type Data struct { + Name string `form:"name"` - FileHeader *multipart.FileHeader `file:"file_upload"` - } - ``` + Compressed file.Zip `file:"upload"` + + FileHeader *multipart.FileHeader `file:"file_upload"` +} +``` - The `form` tag is used to bind non-file fields. - The `file` tag is used to bind file fields. If the tag is not present, the field name is used as the key. - `HostName()` - to access the host name for the incoming request - ```go - // for example if request is made from xyz.com - host := ctx.Request.HostName() - // the host would be http://xyz.com - // Note: the protocol if not provided in the headers will be set to http by default - ``` + +```go +// for example if request is made from xyz.com + host := ctx.Request.HostName() + // the host would be http://xyz.com + // Note: the protocol if not provided in the headers will be set to http by default +``` + - `Params(string)` - to access all query parameters for a given key returning slice of strings. - ```go - // Example: Request is /search?category=books,electronics&category=tech - values := ctx.Request.Params("category") - // values = []string{"books", "electronics", "tech"} - ``` + +```go +// Example: Request is /search?category=books,electronics&category=tech +values := ctx.Request.Params("category") +// values = []string{"books", "electronics", "tech"} +``` ## Accessing dependencies diff --git a/examples/using-file-bind/README.md b/examples/using-file-bind/README.md index 0b10006b8..bcfe6dde4 100644 --- a/examples/using-file-bind/README.md +++ b/examples/using-file-bind/README.md @@ -6,19 +6,19 @@ it to the fields of the struct. 
GoFr currently supports zip file type and also b ### Usage ```go type Data struct { - Compressed file.Zip `file:"upload"` + Compressed file.Zip `file:"upload"` - FileHeader *multipart.FileHeader `file:"file_upload"` + FileHeader *multipart.FileHeader `file:"file_upload"` } -func Handler (c *gofr.Context) (interface{}, error) { - var d Data - - // bind the multipart data into the variable d - err := c.Bind(&d) - if err != nil { - return nil, err - } +func Handler(c *gofr.Context) (interface{}, error) { + var d Data + + // bind the multipart data into the variable d + err := c.Bind(&d) + if err != nil { + return nil, err + } } ``` diff --git a/examples/using-migrations/readme.md b/examples/using-migrations/readme.md index 4137e4a64..0a81e65d5 100644 --- a/examples/using-migrations/readme.md +++ b/examples/using-migrations/readme.md @@ -9,19 +9,19 @@ This GoFr example demonstrates the use of `migrations` through a simple HTTP ser docker run --name gofr-mysql -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=test -p 2001:3306 -d mysql:8.0.30 docker run --name gofr-redis -p 2002:6379 -d redis:7.0.5 docker run --name kafka-1 -p 9092:9092 \ --e KAFKA_ENABLE_KRAFT=yes \ --e KAFKA_CFG_PROCESS_ROLES=broker,controller \ --e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ --e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ --e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ --e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ --e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ --e KAFKA_BROKER_ID=1 \ --e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ --e ALLOW_PLAINTEXT_LISTENER=yes \ --e KAFKA_CFG_NODE_ID=1 \ --v kafka_data:/bitnami \ -bitnami/kafka:3.4 + -e KAFKA_ENABLE_KRAFT=yes \ + -e KAFKA_CFG_PROCESS_ROLES=broker,controller \ + -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ + -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ + -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ + -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ + -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ + -e KAFKA_BROKER_ID=1 \ + -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ + -e ALLOW_PLAINTEXT_LISTENER=yes \ + -e KAFKA_CFG_NODE_ID=1 \ + -v kafka_data:/bitnami \ + bitnami/kafka:3.4 ``` - Now run the example using below command : diff --git a/examples/using-publisher/readme.md b/examples/using-publisher/readme.md index 0157de36a..e17f944f9 100644 --- a/examples/using-publisher/readme.md +++ b/examples/using-publisher/readme.md @@ -8,19 +8,19 @@ matching route. 
- Run the docker image of Kafka and ensure that your provided topics are created before publishing ```console docker run --name kafka-1 -p 9092:9092 \ - -e KAFKA_ENABLE_KRAFT=yes \ --e KAFKA_CFG_PROCESS_ROLES=broker,controller \ --e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ --e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ --e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ --e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ --e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ --e KAFKA_BROKER_ID=1 \ --e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ --e ALLOW_PLAINTEXT_LISTENER=yes \ --e KAFKA_CFG_NODE_ID=1 \ --v kafka_data:/bitnami \ -bitnami/kafka:3.4 + -e KAFKA_ENABLE_KRAFT=yes \ + -e KAFKA_CFG_PROCESS_ROLES=broker,controller \ + -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ + -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ + -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ + -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ + -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ + -e KAFKA_BROKER_ID=1 \ + -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ + -e ALLOW_PLAINTEXT_LISTENER=yes \ + -e KAFKA_CFG_NODE_ID=1 \ + -v kafka_data:/bitnami \ + bitnami/kafka:3.4 ``` - Now run the example using below command : diff --git a/examples/using-subscriber/readme.md b/examples/using-subscriber/readme.md index 550662331..185721d64 100644 --- a/examples/using-subscriber/readme.md +++ b/examples/using-subscriber/readme.md @@ -8,19 +8,19 @@ on the handler response. - Run the docker image of kafka and ensure that your provided topics are created before subscribing. ```console docker run --name kafka-1 -p 9092:9092 \ - -e KAFKA_ENABLE_KRAFT=yes \ --e KAFKA_CFG_PROCESS_ROLES=broker,controller \ --e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ --e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ --e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ --e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ --e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ --e KAFKA_BROKER_ID=1 \ --e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ --e ALLOW_PLAINTEXT_LISTENER=yes \ --e KAFKA_CFG_NODE_ID=1 \ --v kafka_data:/bitnami \ -bitnami/kafka:3.4 + -e KAFKA_ENABLE_KRAFT=yes \ + -e KAFKA_CFG_PROCESS_ROLES=broker,controller \ + -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \ + -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \ + -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \ + -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \ + -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true \ + -e KAFKA_BROKER_ID=1 \ + -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093 \ + -e ALLOW_PLAINTEXT_LISTENER=yes \ + -e KAFKA_CFG_NODE_ID=1 \ + -v kafka_data:/bitnami \ + bitnami/kafka:3.4 ``` - Now run the example using below command :