
Goal: Build a predictive model to determine if a passenger survived the Titanic disaster using features like age, gender, and class. The model aims to find key factors affecting survival, offering accurate predictions. This analysis showcases data-driven insights for better decision-making in similar scenarios.

mucahitmoonstar/Titanic-Survival-Prediction

Dataset Description: "The sinking of the Titanic is one of the most infamous shipwrecks in history.

On April 15, 1912, during her maiden voyage, the widely considered “unsinkable” RMS Titanic sank after colliding with an iceberg. Unfortunately, there weren’t enough lifeboats for everyone on board, resulting in the death of 1502 out of 2224 passengers and crew.

While there was some element of luck involved in surviving, it seems some groups of people were more likely to survive than others.

In this challenge, we ask you to build a predictive model that answers the question: “what sorts of people were more likely to survive?” using passenger data (i.e., name, age, gender, socio-economic class, etc.).

Acknowledgements: This dataset has been referred from Kaggle: https://www.kaggle.com/c/titanic/data.

Objective: Understand the Dataset & cleanup (if required). Build a strong classification model to predict whether the passenger survives or not. Also fine-tune the hyperparameters & compare the evaluation metrics of various classification algorithms."

(Figure: Titanic dataset illustration)

GOAL: "The main objective is to develop a predictive model that determines whether a passenger survived the Titanic disaster using various features, including age, gender, and class. The aim is to explore patterns and relationships that influenced survival odds. This model will not only predict outcomes accurately but also highlight the most critical factors impacting survival. By analyzing the Titanic dataset, this project demonstrates how data-driven methods can provide valuable insights into historical events and inform decision-making in similar future scenarios. The final model seeks to achieve high accuracy, offering reliable predictions based on the provided dataset."

ALGORITHMS:

Support Vector Machine (SVM)

Description: SVM finds a hyperplane that best separates the classes. It is effective in high-dimensional spaces and when there is a clear margin of separation.
Pros: Works well with smaller datasets and non-linear boundaries.
Cons: Can be slow with large datasets and is sensitive to feature scaling.

Support Vector Machine Evaluation:

    Confusion matrix:
    [[90 19]
     [18 51]]

                  precision    recall  f1-score   support

               0       0.83      0.83      0.83       109
               1       0.73      0.74      0.73        69

        accuracy                           0.79       178
       macro avg       0.78      0.78      0.78       178
    weighted avg       0.79      0.79      0.79       178

SVM Accuracy: 0.7921348314606742
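
The repository's training code is not reproduced here; the following is a minimal sketch of how such an SVM result could be produced with scikit-learn, assuming the preprocessed Titanic features and labels are already split into `X_train`, `X_test`, `y_train`, `y_test` (placeholder names).

```python
# Minimal SVM sketch (assumed variable names, not the repository's actual code).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

# Scaling matters here: SVMs are sensitive to feature scale.
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm_clf.fit(X_train, y_train)

y_pred = svm_clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("SVM Accuracy:", accuracy_score(y_test, y_pred))
```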

K-Nearest Neighbors (KNN)

Description: KNN is a simple, instance-based learning algorithm that assigns a class to a sample based on the majority class of its nearest neighbors.
Pros: Simple to implement and interpret; no training phase.
Cons: Sensitive to feature scaling and to the choice of K.

K-Nearest Neighbors Evaluation:

    Confusion matrix:
    [[89 20]
     [47 22]]

                  precision    recall  f1-score   support

               0       0.65      0.82      0.73       109
               1       0.52      0.32      0.40        69

        accuracy                           0.62       178
       macro avg       0.59      0.57      0.56       178
    weighted avg       0.60      0.62      0.60       178

KNN Accuracy: 0.6235955056179775
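
A comparable sketch for KNN, again using the same placeholder splits (`X_train`, `X_test`, `y_train`, `y_test`); the value of `n_neighbors` shown is illustrative, not the tuned value used in this project.

```python
# Minimal KNN sketch; scaling is important because KNN relies on distances.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

knn_clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn_clf.fit(X_train, y_train)

y_pred = knn_clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("KNN Accuracy:", accuracy_score(y_test, y_pred))
```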

AdaBoost (Adaptive Boosting)

Description: AdaBoost is an ensemble method that combines multiple weak learners (usually decision trees) into a strong learner by focusing on hard-to-predict instances.
Pros: Improves accuracy by focusing on misclassified instances; reduces overfitting.
Cons: Sensitive to noisy data and outliers.

AdaBoost Classifier Evaluation:

    Confusion matrix:
    [[86 23]
     [13 56]]

                  precision    recall  f1-score   support

               0       0.87      0.79      0.83       109
               1       0.71      0.81      0.76        69

        accuracy                           0.80       178
       macro avg       0.79      0.80      0.79       178
    weighted avg       0.81      0.80      0.80       178

AdaBoost Accuracy: 0.797752808988764
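
A hedged sketch of an AdaBoost setup; `n_estimators` and `learning_rate` below are common defaults, not the values actually tuned here.

```python
# Minimal AdaBoost sketch over shallow decision trees (the default base learner).
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

ada_clf = AdaBoostClassifier(n_estimators=100, learning_rate=1.0, random_state=42)
ada_clf.fit(X_train, y_train)

y_pred = ada_clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("AdaBoost Accuracy:", accuracy_score(y_test, y_pred))
```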

XGBoost (Extreme Gradient Boosting)

Description: An advanced implementation of gradient boosting that uses tree boosting algorithms. It is known for its speed and high performance.
Pros: High predictive power, handles missing data, strong regularization.
Cons: Complex to tune, higher memory consumption.

XGBoost Classifier Evaluation:

    Confusion matrix:
    [[90 19]
     [21 48]]

                  precision    recall  f1-score   support

               0       0.81      0.83      0.82       109
               1       0.72      0.70      0.71        69

        accuracy                           0.78       178
       macro avg       0.76      0.76      0.76       178
    weighted avg       0.77      0.78      0.77       178

XGBoost Accuracy: 0.7752808988764045
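
A sketch of how XGBoost might be applied; the hyperparameters below are common starting points rather than the project's tuned configuration, and `X_train`/`y_train` remain placeholder names.

```python
# Minimal XGBoost sketch with a few commonly tuned parameters.
from xgboost import XGBClassifier
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

xgb_clf = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    subsample=0.8,
    eval_metric="logloss",
)
xgb_clf.fit(X_train, y_train)

y_pred = xgb_clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("XGBoost Accuracy:", accuracy_score(y_test, y_pred))
```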

ANN (Artificial Neural Network)

Description: An Artificial Neural Network (ANN) is a computational model inspired by the human brain. It consists of layers of interconnected nodes called neurons, which work together to recognize patterns in data. The network typically has three main types of layers:

  • Input Layer: Receives the raw data inputs (features).
  • Hidden Layers: Intermediate layers where neurons process data, learning complex patterns and relationships through weighted connections.
  • Output Layer: Produces the final prediction or classification result.

ANNs learn by adjusting the weights of connections using an optimization algorithm like backpropagation, which minimizes the error between predicted and actual outputs. They are especially effective at handling nonlinear relationships, complex data, and pattern recognition tasks, making them popular for applications like image recognition, language processing, and classification problems.

Artificial Neural Network Evaluation:

    Classification Report:

                  precision    recall  f1-score   support

               0       0.79      0.91      0.85       105
               1       0.84      0.66      0.74        74

        accuracy                           0.81       179
       macro avg       0.82      0.79      0.80       179
    weighted avg       0.81      0.81      0.81       179

ANN Accuracy: 0.81
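
One way such a network could be built is with Keras; the layer sizes, dropout rate, and training settings below are illustrative assumptions rather than the architecture actually used in this repository.

```python
# Minimal Keras sketch for binary classification on tabular Titanic features.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(X_train.shape[1],)),  # number of preprocessed features (placeholder data)
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.3),                       # regularization against overfitting
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # predicted probability of survival
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving.
early_stop = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
model.fit(X_train, y_train, validation_split=0.2, epochs=100,
          batch_size=32, callbacks=[early_stop], verbose=0)

loss, acc = model.evaluate(X_test, y_test, verbose=0)
print("ANN Accuracy:", round(acc, 2))
```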

Random Forest

Description: Random Forest is an ensemble machine learning algorithm that consists of multiple decision trees. It combines the output of several trees to make a final prediction, which makes it more robust and accurate.
How it Works: It creates a "forest" of decision trees, each trained on a different random subset of the data and features. For classification, the final prediction is the majority vote across all trees; for regression, the predictions are averaged.
Advantages: It reduces overfitting (compared to a single decision tree), handles missing values, and can manage large datasets with high dimensionality.
Applications: Random Forest is widely used for classification and regression tasks, such as predicting customer churn, disease diagnosis, and financial forecasting.

Random Forest Classifier Evaluation:

    Confusion matrix:
    [[92 17]
     [19 50]]

                  precision    recall  f1-score   support

               0       0.83      0.84      0.84       109
               1       0.75      0.72      0.74        69

        accuracy                           0.80       178
       macro avg       0.79      0.78      0.79       178
    weighted avg       0.80      0.80      0.80       178

Random Forest Accuracy: 0.797752808988764
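
A minimal Random Forest sketch under the same placeholder variable names; the tree count is illustrative. Inspecting `feature_importances_` is one way to surface which factors (e.g. sex, class, age) drive the predictions.

```python
# Minimal Random Forest sketch.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

rf_clf = RandomForestClassifier(n_estimators=200, random_state=42)
rf_clf.fit(X_train, y_train)

y_pred = rf_clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("Random Forest Accuracy:", accuracy_score(y_test, y_pred))

# Rough view of which features matter most (feature_names is a placeholder list).
# for name, score in sorted(zip(feature_names, rf_clf.feature_importances_),
#                           key=lambda p: p[1], reverse=True):
#     print(f"{name}: {score:.3f}")
```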

Logistic Regression

Description: Logistic Regression is a statistical model used for binary classification, predicting the probability of a binary outcome (0 or 1).
How it Works: It uses a linear equation to estimate the probability of a class, applying the sigmoid function to constrain the output between 0 and 1. Based on a threshold (usually 0.5), the prediction is assigned to one of the two categories.
Advantages: It is simple, easy to interpret, requires little computational power, and works well with linearly separable data.
Applications: Logistic Regression is used for binary classification tasks such as spam detection, credit scoring, and predicting medical conditions.

Logistic Regression Evaluation:

    Confusion matrix:
    [[87 22]
     [16 53]]

                  precision    recall  f1-score   support

               0       0.84      0.80      0.82       109
               1       0.71      0.77      0.74        69

        accuracy                           0.79       178
       macro avg       0.78      0.78      0.78       178
    weighted avg       0.79      0.79      0.79       178

Logistic Regression Accuracy: 0.7865168539325843
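
A minimal Logistic Regression sketch, again with assumed variable names; scaling is not strictly required but typically helps the solver converge.

```python
# Minimal Logistic Regression sketch.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

lr_clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
lr_clf.fit(X_train, y_train)

y_pred = lr_clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("Logistic Regression Accuracy:", accuracy_score(y_test, y_pred))
```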

RESULT:

An Artificial Neural Network (ANN) may outperform algorithms like SVM, KNN, AdaBoost, XGBoost, Random Forest, and Logistic Regression in certain scenarios due to several unique strengths:

1. Handling Nonlinear Relationships

  • ANNs are inherently good at capturing complex, nonlinear patterns in the data because of their multi-layer architecture and nonlinear activation functions.
  • Algorithms like Logistic Regression and SVM are primarily linear classifiers unless kernels (e.g., SVM with a nonlinear kernel) are added, which can make them more complex and harder to tune.

2. Feature Interaction

  • ANNs can automatically learn and represent intricate interactions between features without explicit feature engineering. Hidden layers capture these complex relationships, which might be challenging for models like KNN and Logistic Regression.
  • Other algorithms like Random Forest and XGBoost can handle interactions, but they may not always perform as well on highly nonlinear data, depending on the feature set.

3. Scalability and Adaptability

  • ANNs are highly scalable and adaptable to a variety of problems, including classification, regression, time-series prediction, and more.
  • While KNN is simple, its performance can degrade with large datasets due to the computational complexity of distance calculations. Algorithms like AdaBoost and Random Forest may face challenges with scalability, especially as feature space grows.

4. Handling High Dimensional Data

  • ANNs perform well with high-dimensional data because of their flexible architecture and ability to reduce dimensions implicitly during training. They are robust even when the dataset has many features or complex data structures.
  • Traditional models such as Logistic Regression and KNN often need careful feature selection or engineering as the feature space grows, while Random Forest can become slow and memory-intensive.

5. End-to-End Learning and Flexibility

  • ANNs allow for end-to-end learning, meaning raw data can be fed directly into the model, reducing the need for manual preprocessing. This is beneficial when dealing with complex, unstructured data like images, audio, or raw sensor inputs.
  • Other models like Logistic Regression and KNN often require significant feature preprocessing to perform well.

6. Regularization Techniques

  • ANNs offer powerful regularization techniques like Dropout, Batch Normalization, and Early Stopping, which help control overfitting and enhance generalization to new data.
  • While techniques like Boosting (AdaBoost, XGBoost) inherently reduce overfitting by adjusting model weights iteratively, ANNs provide more direct control over regularization.

7. Handling Noisy and Incomplete Data

  • ANNs can be robust to noisy data and can learn patterns even with partially missing features due to the flexibility of the hidden layers. Proper training and dropout techniques help ANNs handle noise better.
  • Algorithms like SVM and Logistic Regression can be sensitive to noise, while KNN might get negatively affected by outliers and noisy data, impacting accuracy.

8. Training and Learning Efficiency

  • ANNs use backpropagation to update weights efficiently, learning from errors iteratively and adjusting parameters accordingly. This method allows ANNs to optimize feature weights with more precision.
  • Algorithms like Random Forest and Boosting ensembles (AdaBoost, XGBoost) also have iterative learning processes, but they might require more careful hyperparameter tuning (e.g., depth, learning rate) to get optimal results.

9. Ability to Learn Deep Hierarchies

  • With deeper architectures, ANNs can learn complex hierarchical representations that other algorithms can't. Each hidden layer adds an abstraction layer, making it easier to learn higher-level features that improve classification.
  • Traditional algorithms like Logistic Regression lack the capability to learn hierarchies, and SVM, KNN, and Random Forest do not build deep hierarchical representations of the input on their own.

Summary Comparison

| Aspect | ANN Strength | Alternative Algorithms |
| --- | --- | --- |
| Nonlinear relationships | Captures nonlinearity well with hidden layers | SVM (limited unless kernels are used), Logistic Regression (linear) |
| Feature interaction | Automatically captures complex interactions | Random Forest & XGBoost (capture interactions, but may need manual tuning) |
| Scalability | Highly scalable with large data and feature sets | KNN (poor with large data), AdaBoost (can slow down with complexity) |
| High-dimensional data | Handles high-dimensional data efficiently | KNN, Logistic Regression (struggle without feature selection or dimensionality reduction) |
| Regularization | Dropout, Batch Normalization, Early Stopping available | Boosting (inherent but less direct control) |
| Noise tolerance | Better with noisy/incomplete data | SVM, KNN, and Logistic Regression sensitive to noise |
| Deep hierarchies | Learns hierarchical data representations | Random Forest & XGBoost (less deep-learning capability) |

Conclusion

ANNs provide a higher degree of flexibility, the ability to learn complex relationships, and a range of powerful techniques to control overfitting. These strengths often make them superior to traditional machine learning algorithms, especially when dealing with non-linear, high-dimensional, or noisy data, and scenarios requiring deep feature hierarchies. However, they also require careful tuning, longer training times, and more computational resources, making the choice of algorithm context-dependent.
