Mapping Bangladesh’s Seismic Risks
- Software: ArcGIS Pro
- Timeframe: 2001–2025
- Data: USGS
- Study Area: Bangladesh
Skilled in Python, R, and GIS software like ArcGIS Pro, ArcMap, QGIS, and Erdas Imagine, with practical experience in analyzing environmental data. Eager to learn from experts and contribute to impactful research that promotes sustainability and addresses global climate challenges.
Using USGS earthquake data, I created a seismic risk map of Bangladesh, classifying zones from very low to very high risk. This helps visualize earthquake vulnerability for disaster preparedness and urban planning. A critical tool for policymakers, researchers, and communities!
What It Shows:
For a high-quality image, visit this link.
How to Make One for Yourself:
1. Data Collection
2. Shapefile Prep
3. Interpolation in ArcGIS Pro
4. Risk Classification
5. Design & Export

“Mapping risks today ensures safer cities tomorrow.”
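The risk-classification step can be sketched in R as well (the actual map was interpolated and classified in ArcGIS Pro, so this is an illustration only, and the magnitude values below are made up, not from the USGS catalog):

```r
# Binning earthquake magnitudes into the five risk classes used on the map.
# In ArcGIS Pro the same idea is applied to the interpolated surface;
# here it is shown on raw magnitudes for illustration.
mags <- c(3.2, 4.1, 4.8, 5.5, 6.3, 7.1)  # sample magnitudes (hypothetical)
risk <- cut(mags,
            breaks = c(-Inf, 4, 5, 6, 7, Inf),
            labels = c("Very Low", "Low", "Moderate", "High", "Very High"))
table(risk)
```

The break values are placeholders; the real map's class boundaries came from the interpolated surface in ArcGIS Pro.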
Need Help?
If you’re interested in replicating this project or need guidance for your region, feel free to reach out! I’m happy to share workflows, data sources, or troubleshoot GIS challenges. Let’s collaborate to build resilient communities. 🌍✨
Email: official.parvej.hossain@gmail.com
I recently created a unique hydrological basin map of Bangladesh using ArcGIS Pro—a project that combines data precision with creative visualization. Starting with raw hydrological data (watersheds, rivers, lakes, and DEM), I processed and symbolized each layer to highlight the intricate water systems of the region. What makes this project stand out is its focus on clarity and usability, blending aesthetic design with technical accuracy. This map not only showcases Bangladesh’s hydrological features but also serves as a valuable resource for environmental planning and research. Dive into the details of how I brought this vision to life!
How I built it (in a nutshell):
1️⃣ Data Collection: Sourced global hydrology datasets (HydroSHEDS, HydroRIVERS) and a 30m-resolution Digital Elevation Model (DEM) from OpenTopography.
2️⃣ GIS Processing: Merged basin polygons, clipped DEM/river layers to Bangladesh’s extent, and classified river orders (1-6) for dynamic symbology.
3️⃣ Visualization: Applied custom glow effects for rivers, labeled key cities/lakes, and designed a sleek layout with hillshading for topographic depth.
4️⃣ Validation: Cross-referenced with satellite imagery and local hydrology reports for accuracy.
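The river-order symbology from step 2 can be sketched outside ArcGIS too: each order (1–6) gets its own line width and color, with higher-order rivers drawn thicker. A minimal base-R illustration (the actual styling was done interactively in ArcGIS Pro; the width values are my own choices):

```r
# Map river orders 1-6 to drawing widths and a viridis-style palette,
# mirroring the dynamic symbology used in the map (illustrative only).
orders <- 1:6
widths <- seq(0.4, 3.0, length.out = 6)        # order 1 thinnest, order 6 thickest
cols   <- grDevices::hcl.colors(6, "viridis")  # one color per order
style  <- data.frame(order = orders, lwd = widths, col = cols)
```

Joining this lookup table to the river layer by order reproduces the "thicker and brighter for bigger rivers" effect.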
“Where Hydrology Meets Art: A Technically Robust, Visually Stunning Basin Map”
Key Insights and Outcomes for Portfolio
This project presents a high-resolution soil type map of Bangladesh using FAO’s global soil dataset and the DOMSOI classification system. Developed in ArcGIS Pro, the map highlights the spatial variability of soils and supports evidence-based decision-making for sustainable agriculture, climate resilience, and environmental planning. The result is a visually compelling and data-rich tool useful for urban planners, agronomists, and GIS professionals.
How I built it (in a nutshell):
“Where data meets dirt, insights grow.”
This project visualizes Bangladesh’s international flight routes on a dynamic Leaflet map using interactive, curved spatial lines and airport points. It features multi-layer map styles, smooth hover effects with detailed popups, and a legend highlighting key airports. A live flight radar widget is embedded to provide real-time air traffic data over Bangladesh, making the map informative and engaging. This tool is a powerful visualization for aviation analysis, travel planning, and geospatial storytelling.
How I built it (in a nutshell):
1. Install R and RStudio (if not installed).
2. Install the needed packages by running:
install.packages(c("tidyverse", "sf", "leaflet", "htmltools", "htmlwidgets", "viridis", "geosphere"))
3. Load the spatial data with sf::st_read() and read.csv(), filtering airports with Country == "Bangladesh".
4. Use geosphere::gcIntermediate() to create smooth curved paths between the origin and destination airports, then convert them to sf spatial lines (st_linestring).
5. Use htmlwidgets::onRender() for custom JavaScript behavior, and labelOptions(sticky = TRUE) on airports to avoid flickering hover popups.
6. Use leaflet::addControl() to add a bold title at the top center.
7. Use leaflet::addLegend() to show color-coded BD airports with full names.
8. Export the interactive map with htmlwidgets::saveWidget().

“This map was built with the help of AI. Use AI if you can control the commands.”

Tips:
- Experiment with other airport or flight data sources.
- Adjust hover sensitivity by adding invisible, thicker lines under routes.
- Use sticky labels to smooth the airport-name hover.
- Customize colors and line styles with colorFactor() and dashArray.
- Explore more Leaflet providers for different map styles.
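The curved routes come from geosphere::gcIntermediate(), which samples points along the great circle between two airports. As a rough base-R illustration of what that call computes (the real package also handles edge cases such as antipodal points), here is a minimal spherical-interpolation sketch with hypothetical coordinates:

```r
# Spherical linear interpolation between two lon/lat points (degrees),
# approximating what geosphere::gcIntermediate() returns.
gc_points <- function(lon1, lat1, lon2, lat2, n = 50) {
  to_rad <- pi / 180
  # Convert each point to a 3D unit vector on the sphere
  p1 <- c(cos(lat1 * to_rad) * cos(lon1 * to_rad),
          cos(lat1 * to_rad) * sin(lon1 * to_rad),
          sin(lat1 * to_rad))
  p2 <- c(cos(lat2 * to_rad) * cos(lon2 * to_rad),
          cos(lat2 * to_rad) * sin(lon2 * to_rad),
          sin(lat2 * to_rad))
  d <- acos(sum(p1 * p2))          # angular distance between the points
  f <- seq(0, 1, length.out = n)   # interpolation fractions
  a <- sin((1 - f) * d) / sin(d)   # slerp weights
  b <- sin(f * d) / sin(d)
  x <- a * p1[1] + b * p2[1]
  y <- a * p1[2] + b * p2[2]
  z <- a * p1[3] + b * p2[3]
  data.frame(lon = atan2(y, x) / to_rad,
             lat = atan2(z, sqrt(x^2 + y^2)) / to_rad)
}

# Example: points along the equator from (0, 0) to (90, 0)
pts <- gc_points(0, 0, 90, 0, n = 5)
```

In the actual map, each such point sequence is turned into an sf linestring and added to the Leaflet map as a curved polyline.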
The Aspire Leaders Program is a transformative leadership initiative for underserved youth worldwide. It offers mentorship, networking, and resources to develop skills and unlock potential for global impact. This program has helped me develop Strategic Thinking, Communication, Networking, Public Speaking, Problem Solving, and Policy Development.
R provides built-in functions for statistical analysis:
- summary(): Summary statistics (min, max, quartiles, mean).
- sum(): Total of values.
- range(): Minimum and maximum.
- var(): Variance.
- sd(): Standard deviation.
# Basic dataset
data_basic <- c(2, 4, 6, 8, 10)
# Advanced dataset (mtcars)
data(mtcars)
mpg <- mtcars$mpg
# BASIC TASKS
# HW1: Calculate the sum of data_basic
# HW2: Find the range (min and max) of data_basic
# HW3: Compute the variance of data_basic
# ADVANCED TASKS
# HW4: Calculate the standard deviation of mtcars$mpg
# HW5: Generate a summary of mtcars$hp (horsepower)
# BASIC SOLUTIONS
sum_basic <- sum(data_basic)
range_basic <- range(data_basic)
var_basic <- var(data_basic)
# ADVANCED SOLUTIONS
sd_mpg <- sd(mpg)
summary_hp <- summary(mtcars$hp)
- Mean: Average (mean()).
- Median: Middle value (median()).
- Mode: Most frequent value (no built-in function; custom code required).
# Basic dataset
data_numbers <- c(1, 2, 2, 3, 4, 5, 5, 5)
# Advanced dataset (iris)
data(iris)
sepal_length <- iris$Sepal.Length
# BASIC TASKS
# HW1: Calculate the mean of data_numbers
# HW2: Find the median of data_numbers
# HW3: Write a function to compute the mode
# ADVANCED TASKS
# HW4: Compute the mean of iris$Sepal.Length
# HW5: Find the median of iris$Petal.Length grouped by Species
# BASIC SOLUTIONS
mean_val <- mean(data_numbers)
median_val <- median(data_numbers)
mode_func <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
mode_val <- mode_func(data_numbers)
# ADVANCED SOLUTIONS
mean_sepal <- mean(sepal_length)
median_petal <- aggregate(Petal.Length ~ Species, iris, median)
- max()/min(): Extreme values.
- quantile(): Percentiles (e.g., 25th, 50th).
- IQR(): Interquartile range.
# Basic dataset
data_scores <- c(45, 67, 89, 34, 56, 78, 90, 23)
# Advanced dataset (airquality)
data(airquality)
temp <- airquality$Temp
# BASIC TASKS
# HW1: Find the max and min of data_scores
# HW2: Calculate the 75th percentile of data_scores
# HW3: Compute the IQR of data_scores
# ADVANCED TASKS
# HW4: Find the 90th percentile of airquality$Temp
# HW5: Identify outliers in airquality$Ozone using IQR
# BASIC SOLUTIONS
max_score <- max(data_scores)
min_score <- min(data_scores)
percentile_75 <- quantile(data_scores, 0.75)
iqr_score <- IQR(data_scores)
# ADVANCED SOLUTIONS
percentile_90 <- quantile(temp, 0.90)
# Outlier detection (IQR method)
# airquality$Ozone contains NAs, so na.rm = TRUE is required
q1 <- quantile(airquality$Ozone, 0.25, na.rm = TRUE)
q3 <- quantile(airquality$Ozone, 0.75, na.rm = TRUE)
iqr <- IQR(airquality$Ozone, na.rm = TRUE)
outliers <- airquality$Ozone[!is.na(airquality$Ozone) &
  (airquality$Ozone < (q1 - 1.5*iqr) | airquality$Ozone > (q3 + 1.5*iqr))]
Perform t-tests (t.test()), ANOVA (aov()), and chi-square tests (chisq.test()) to compare groups.
# Create sample data
group_a <- c(20, 22, 19, 18, 24)
group_b <- c(25, 24, 22, 23, 20)
# HW1: Perform an independent t-test between group_a and group_b
# HW2: Run a one-way ANOVA on `mtcars` to compare `mpg` across cylinder groups
# HW1
t.test(group_a, group_b)
# HW2
cyl_groups <- split(mtcars$mpg, mtcars$cyl)
anova_result <- aov(mpg ~ factor(cyl), data=mtcars)
summary(anova_result)
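chisq.test() is mentioned above but not demonstrated; a quick sketch on a contingency table built from mtcars (transmission type vs. cylinder count):

```r
# Chi-square test of independence: transmission (am) vs. cylinders (cyl).
# Some expected cell counts are small here, so R emits a warning;
# this is only an illustration of the call.
tab <- table(mtcars$am, mtcars$cyl)
chi <- suppressWarnings(chisq.test(tab))
chi$p.value   # a small p-value suggests am and cyl are not independent
```

For small tables like this, fisher.test() is often the safer choice in practice.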
Fit linear (lm()) and logistic regression models. Use summary() to interpret coefficients and p-values.
# Use `mtcars` for linear regression
# HW1: Fit a linear model predicting `mpg` from `wt` and `hp`
# HW2: Check the R-squared value of the model
# HW1
model <- lm(mpg ~ wt + hp, data=mtcars)
# HW2
summary(model)$r.squared
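Logistic regression is mentioned above but only the linear model is shown; a minimal sketch using glm() with family = binomial to predict transmission type (am) from weight:

```r
# Logistic regression: probability of a manual transmission (am = 1)
# as a function of car weight.
logit_model <- glm(am ~ wt, data = mtcars, family = binomial)
summary(logit_model)                              # coefficients on the log-odds scale
probs <- predict(logit_model, type = "response")  # fitted probabilities
```

As with lm(), summary() reports coefficients and p-values, but the coefficients are log-odds; exp(coef(logit_model)) gives odds ratios.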
Recode variables with dplyr::mutate() and case_when(). Create new variables using arithmetic/logical operations.
# Create sample data
df <- data.frame(
age = c(18, 25, 30, 35, 40),
income = c(50000, 60000, 75000, 90000, 120000)
)
# HW1: Recode `age` into categories: "<25", "25-35", ">35"
# HW2: Create a new variable `income_group` (Low: <70k, High: >=70k)
# HW1
library(dplyr)
df <- df %>%
mutate(age_group = case_when(
age < 25 ~ "<25",
age >= 25 & age <= 35 ~ "25-35",
age > 35 ~ ">35"
))
# HW2
df <- df %>%
mutate(income_group = ifelse(income >= 70000, "High", "Low"))
Export tables and plots using write.csv(), stargazer, or flextable.
# HW1: Save `mtcars` summary to a CSV
# HW2: Export a ggplot to PNG
# HW1
write.csv(summary(mtcars), "mtcars_summary.csv")
# HW2 (ggsave() needs ggplot2 loaded and a plot to save)
library(ggplot2)
ggplot(mtcars, aes(wt, mpg)) + geom_point()
ggsave("plot.png", plot = last_plot())
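write.csv(summary(mtcars), ...) works but produces an awkward character table; a tidier alternative (my own suggestion, not part of the original exercise) is to build the statistics as a data frame first and export that:

```r
# Build a tidy per-column summary of mtcars, then export it as CSV.
stats <- data.frame(
  variable = names(mtcars),
  mean     = sapply(mtcars, mean),
  sd       = sapply(mtcars, sd),
  min      = sapply(mtcars, min),
  max      = sapply(mtcars, max)
)
write.csv(stats, "mtcars_stats.csv", row.names = FALSE)
```

The resulting CSV has one row per variable and one column per statistic, which is much easier to reuse than the raw summary() printout.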
- Skewness: Measure of asymmetry (moments package).
- Kurtosis: Tailedness of the distribution (moments package).
- Covariance: cov().
- Correlation: cor().
# Advanced dataset (cars)
data(cars)
speed <- cars$speed
dist <- cars$dist
# HW1: Calculate covariance between speed and distance
# HW2: Compute correlation between speed and distance
# HW3: Install the `moments` package and calculate skewness of speed
# HW1
covariance <- cov(speed, dist)
# HW2
correlation <- cor(speed, dist)
# HW3
# install.packages("moments")  # run once if not already installed
library(moments)
skewness_speed <- skewness(speed)
Use na.rm = TRUE to ignore NA values in calculations.
data_missing <- c(1, 2, NA, 4, 5)
# HW1: Calculate the mean of data_missing (ignore NA)
# HW2: Check if data_missing contains any NA values
mean_missing <- mean(data_missing, na.rm = TRUE)
has_na <- anyNA(data_missing)
Plots visualize trends and relationships. Use plot() for basic graphs, lines()/points() for overlays, and par() for layouts. Customize with main, xlab, ylab, col, lwd, pch, and bg.
# Simple dataset for beginners
x <- 1:10
y <- c(2, 4, 6, 8, 7, 5, 3, 1, 9, 10)
# Complex dataset: mtcars (built-in)
data(mtcars)
mpg <- mtcars$mpg # Miles per gallon
hp <- mtcars$hp # Horsepower
wt <- mtcars$wt # Weight
# BASIC TASKS
# HW1: Plot x vs. y as points
# HW2: Add a blue line to the plot
# HW3: Create a plot with red triangles (pch=17)
# ADVANCED TASKS
# HW4: Plot mpg vs. hp from mtcars, add a smooth line
# HW5: Create a multi-plot layout (2x2 grid)
# HW6: Customize mpg vs. wt plot: title, axis labels, green points, gray background
# BASIC SOLUTIONS
plot(x, y)
lines(x, y, col="blue")  # adds a line to the existing plot from HW1
plot(x, y, pch=17, col="red")
# ADVANCED SOLUTIONS
plot(mpg ~ hp, data=mtcars, main="MPG vs Horsepower", col="purple")
lines(lowess(hp, mpg), col="orange")
par(mfrow=c(2,2))
plot(mpg ~ hp, data=mtcars)
plot(mpg ~ wt, data=mtcars)
hist(mpg, col="lightblue")
boxplot(mpg, col="yellow")
plot(mpg ~ wt, data=mtcars,
main="Weight vs MPG",
xlab="Weight (1000 lbs)",
ylab="Miles Per Gallon",
pch=21, bg="green",
panel.first=grid())
Pie charts show proportions. Use pie(), legend(), and ifelse() for conditional formatting.
# Simple sales data
sales <- c(25, 35, 40)
labels <- c("Apparel", "Electronics", "Groceries")
# Complex dataset: Titanic survival rates
survivors <- c(203, 118, 178, 528)
groups <- c("1st Class", "2nd Class", "3rd Class", "Crew")
# BASIC TASKS
# HW1: Create a pie chart for sales data
# HW2: Add a title and explode the "Groceries" slice
# ADVANCED TASKS
# HW3: Plot Titanic survival rates with gradient colors
# HW4: Add a legend and percentage labels
# HW5: Create a 3D pie chart (use plotrix package)
# BASIC SOLUTIONS
pie(sales, labels=labels)
# Base pie() has no explode argument; exploded slices need plotrix (see pie3D below)
pie(sales, labels=labels, main="Sales Distribution")
# ADVANCED SOLUTIONS
# Install the plotrix package (only needed once)
install.packages("plotrix")
# Load the package
library(plotrix)
pie3D(survivors, labels=groups, main="Titanic Survival Rates",
col=heat.colors(4), explode=0.05)
pie(survivors, labels=paste(groups, " (", round(survivors/sum(survivors)*100, 1), "%)", sep=""),
col=rainbow(4))
legend("right", groups, fill=rainbow(4))
Bar charts compare categories. Use barplot(), beside=TRUE for grouped bars, and col for gradients.
# Monthly sales
months <- c("Jan", "Feb", "Mar")
sales <- c(200, 450, 300)
# Complex dataset: Olympic medal counts
countries <- c("USA", "China", "Russia", "UK")
gold <- c(39, 38, 20, 22)
silver <- c(41, 32, 28, 21)
bronze <- c(33, 18, 23, 22)
# BASIC TASKS
# HW1: Create a vertical bar chart for monthly sales
# HW2: Add grid lines and rotate labels
# ADVANCED TASKS
# HW3: Create stacked bars for Olympic medals
# HW4: Create grouped bars with legends
# HW5: Add error bars using arrows()
# BASIC SOLUTIONS
barplot(sales, names.arg=months, main="Monthly Sales", xlab="Month", ylab="Revenue")
barplot(sales, names.arg=months, las=2, cex.names=0.8, col="lightgreen")
abline(h=seq(0, 500, by=100), lty=2)
# ADVANCED SOLUTIONS
medals <- rbind(gold, silver, bronze)
# HW3: stacked bars (the default); note "silver" is not a valid R color name
barplot(medals, names.arg=countries, col=c("gold", "gray75", "darkorange"),
        legend=rownames(medals), main="Olympic Medals (Stacked)")
# HW4: grouped bars via beside=TRUE
barplot(medals, names.arg=countries, col=c("gold", "gray75", "darkorange"),
        legend=rownames(medals), main="Olympic Medals", beside=TRUE)
# Error bars
bp <- barplot(gold, names.arg=countries, ylim=c(0, 50), col="gold")  # bp holds bar midpoints
arrows(x0=bp, y0=gold-2, y1=gold+2, code=3, angle=90, length=0.1)
Vectors are 1D data structures holding elements of the same type.
- Create: c(), seq(from, to, by), or rep(value, times).
- Access: [index] (positive/negative), logical vectors, or names.
- Sort: sort() (ascending) or rev(sort()) (descending).
- Length: length(vector).
# HW1: Create a vector of even numbers 2, 4, 6 using `seq()`
# HW2: Access the 3rd element of `c(10, 20, 30, 40)`
# HW3: Sort `c(5, 1, 3)` in descending order
# HW4: Check the length of `c("a", "b", "c")`
# HW5: Create a vector with 3 copies of "R" using `rep()`
# HW1
vec_seq <- seq(2, 6, by=2) # Output: 2, 4, 6
# HW2
print(c(10, 20, 30, 40)[3]) # Output: 30
# HW3
sorted <- rev(sort(c(5, 1, 3))) # Output: 5, 3, 1
# HW4
print(length(c("a", "b", "c"))) # Output: 3
# HW5
vec_rep <- rep("R", 3) # Output: "R", "R", "R"
Lists store mixed or nested data.
- Create: list().
- Access: [index] (returns a sublist), [[index]] (returns the element), or $name.
- Modify: Assign new values via [[ ]] or append().
- Add/Remove: list[[new_index]] <- value or list[index] <- NULL.
# HW1: Create a list with "apple", 25, and a sub-list `c(1, 2)`
# HW2: Access the sub-list `c(1, 2)` from HW1
# HW3: Change "apple" to "banana" in the list
# HW4: Add `TRUE` to the end of the list
# HW5: Remove the 2nd element (25)
# HW1
my_list <- list("apple", 25, list(1, 2))
# HW2
print(my_list[[3]]) # Output: 1, 2
# HW3
my_list[[1]] <- "banana"
# HW4
my_list <- append(my_list, TRUE)
# HW5
my_list[2] <- NULL
Matrices are 2D, same-type data structures.
- Create: matrix(data, nrow, ncol).
- Access: [row, col]; [, ] for entire rows/columns.
- Add rows/columns: rbind(), cbind().
- Generate values: seq(), rep().
# HW1: Create a 3x2 matrix with 1-6 using `matrix()`
# HW2: Extract the 2nd row
# HW3: Extract the 1st column
# HW4: Add a row `7, 8` to the matrix
# HW5: Create a matrix with 1, 2 repeated 3 times using `rep()`
# HW1
mat <- matrix(1:6, nrow=3)
# HW2
print(mat[2, ]) # Output: 2, 5
# HW3
print(mat[, 1]) # Output: 1, 2, 3
# HW4
mat <- rbind(mat, c(7, 8))
# HW5
mat_rep <- matrix(rep(1:2, 3), nrow=3)
Arrays extend matrices to multi-dimensional data.
- Create: array(data, dim=c(rows, cols, ...)).
- Access: [i, j, k] for specific elements.
- Dimensions: dim() to check or set dimensions.
# HW1: Create a 2x2x2 array with values 1-8
# HW2: Access the 3rd element of the 1st layer
# HW3: Extract the 2nd layer (all rows/columns)
# HW4: Check the total length of the array
# HW5: Convert a vector `1:12` into a 3x4 array
# HW1
arr <- array(1:8, dim=c(2, 2, 2))
# HW2
print(arr[3]) # Output: 3
# HW3
print(arr[, , 2])
# HW4
print(length(arr)) # Output: 8
# HW5
arr_3d <- array(1:12, dim=c(3, 4))
Data frames store tabular data (mixed types allowed).
- Create: data.frame().
- Access: $column, [, "column"], or subset().
- Modify: Add/remove columns via $ or [ ].
# HW1: Create a data frame with Name (Alice, Bob), Age (25, 30)
# HW2: Access the "Name" column using `$`
# HW3: Add a column "Salary" with 5000, 6000
# HW4: Remove the "Age" column
# HW5: Check the number of rows
# HW1
df <- data.frame(Name=c("Alice", "Bob"), Age=c(25, 30))
# HW2
print(df$Name) # Output: Alice, Bob
# HW3
df$Salary <- c(5000, 6000)
# HW4
df$Age <- NULL
# HW5
print(nrow(df)) # Output: 2
Factors store categorical data with predefined levels.
- Create: factor().
- Modify levels: levels(), factor(..., levels=).
- Ordered factors: ordered=TRUE for ranking.
# HW1: Create a factor with "Low", "Medium", "High"
# HW2: Check the levels of the factor
# HW3: Add "Very High" as a new level
# HW4: Remove "Medium" from the factor
# HW5: Convert the factor to an ordered factor
# HW1
f <- factor(c("Low", "Medium", "High"))
# HW2
print(levels(f)) # Output: "High", "Low", "Medium"
# HW3
f <- factor(f, levels=c("Low", "Medium", "High", "Very High"))
# HW4
f <- droplevels(f[f != "Medium"])  # drop the element and its now-unused level
# HW5
f_ordered <- factor(f, ordered=TRUE)
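Ordered factors make the ranking explicit, which is what enables comparisons between levels; a short sketch:

```r
# With ordered = TRUE and explicit levels, R understands the ranking,
# so comparison operators and min()/max() work on the factor.
ratings <- factor(c("Low", "High", "Medium"),
                  levels = c("Low", "Medium", "High"),
                  ordered = TRUE)
ratings[1] < ratings[2]   # TRUE: "Low" ranks below "High"
min(ratings)              # "Low"
```

Without ordered = TRUE, the same comparison would return NA with a warning, since unordered factor levels have no defined ranking.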
I am available for impactful research that promotes sustainability and addresses global climate change and the environment.
Phone: +880 1714 594091 Email: official.parvej.hossain@gmail.com