Posts by remenyic (5)

We created our very first product, a webshop platform

It started because of a pandemic.

Earlier this year (2020), when the first lockdown occurred, I noticed a lot of people desperately trying to sell their products in Facebook groups. Many shops were obviously getting fewer visits and were on the edge of closing down.

The problem.

The Facebook group where sellers were offering their products was well received: a lot of people liked or commented that they would rather buy from these sellers than go to the hypermarket, and the sellers in return offered to deliver the groceries for free. It was an awesome community effort; however, at times it was really difficult to scroll up and down to find the one shop that actually sold the thing I wanted.

So I thought there should be a better way to navigate the shops and their products, and, just like on Facebook, all shops should be centralized in one place.

No tool / platform existed for this type of job.

I looked around the web, and besides the big online shops and platforms that mainly sell their own products, there was nothing that a small-town grocery store could use.

So I decided to make such a tool instead.

I had felt the need to work on a "bigger" personal project for quite some time, and this was the "perfect storm" for me.

Right off the bat I knew what my app should support: accounts, a database, search functionality, APIs, image processing, etc. And for all of that I needed a tech stack.

Frontend:

Angular, for the simple fact that this project should be able to scale well, and I also had a real need to learn a frontend framework.

Backend:

I went with FastAPI for its async APIs and the speed at which it performs. I also considered Go, but I had only done a few tutorials on it, so it would probably have taken more time than I actually had available.

Since I am in the middle of learning machine learning, I figured it would be a good idea to go with Python, because it will make serving machine learning models later easy and fast.
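To make that concrete, here is a minimal sketch of what serving a model from FastAPI could look like later. It is only an illustration under my own assumptions: the route and the predict_category helper are made up and not part of the actual platform.

# hypothetical sketch: exposing a (placeholder) ML model through a FastAPI endpoint
from fastapi import FastAPI

app = FastAPI()

def predict_category(title: str) -> str:
    # stand-in for a real model that would be loaded at startup
    return "groceries" if "apple" in title.lower() else "other"

@app.get("/predict")
async def predict(title: str):
    # called as e.g. GET /predict?title=Red%20apples
    return {"category": predict_category(title)}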

About the platform.

The platform can be seen and tested here.

You can open an account, then fill in a form about your shop, and that's it! You now have an online shop.

The next step would be to go to your shop and manage it!

The platform offers functionalities like adding products, removing them, adding price reductions (sales), and even discount codes.

A regular user can visit the platform, browse products from whichever category they wish, and add them to their cart without having to care that different products may be sold by different sellers. Once the order is placed, each seller receives their piece of the order to deliver, without knowing that this small order is part of a bigger one.
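As a rough illustration of that order-splitting idea, here is a minimal sketch in Python; the CartItem structure and its fields are hypothetical and not the platform's actual data model.

# hypothetical sketch: splitting one mixed-seller cart into per-seller sub-orders
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CartItem:
    product: str
    seller_id: str
    quantity: int

def split_order_by_seller(cart):
    # group the items so each seller only ever sees their own part of the order
    sub_orders = defaultdict(list)
    for item in cart:
        sub_orders[item.seller_id].append(item)
    return dict(sub_orders)

cart = [
    CartItem("apples", "greengrocer", 2),
    CartItem("bread", "bakery", 1),
    CartItem("cheese", "greengrocer", 1),
]
# the greengrocer gets two items in one sub-order, the bakery gets one
print(split_order_by_seller(cart))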

To this day, the platform is not finished; there is a lot more work to be done, but time constraints left me with no choice but to pause its development.

It is still in an alpha state.

There are a ton of awesome features I wish to add; I had them in mind when I started, but time constraints only allowed progress to this point.

The frontend is just a sketch. The plan was to go with it as is and, once it hit a beta state, pay someone for a nice UI.

All the credit goes to the owners of butoane.com

Comments: 0

FastAPI, setting up infrastructure with NGINX, Gunicorn, Uvicorn on Linux

How to set up the infrastructure for multiple FastAPI apps on a single Linux machine.

Please keep in mind that this is an example, and probably needs some tweaks before shipping to production, but it should be just fine to help you understand how to run FastAPI microservices without using Docker and/or Kubernetes.

PREREQUISITES:

In my example I used Red Hat as the OS, NGINX as the web server, and Gunicorn as the process manager (running Uvicorn workers).


Why you might want to do this:

Both Uvicorn and FastAPI provide examples of how to ship a FastAPI app, but they use Docker as the containerized solution, and I found it very difficult to make sense of the documentation when it came to deploying my FastAPI apps on IaaS (Infrastructure as a Service: basically a bare OS, typically a flavor of Linux).

My use case:

Let's say I have two different microservices; each must be able to run without the other, but both should expose separate endpoints on my domain. For this example I named my apps myApp1 and myApp2.

/home/myApp1/app1.py

from fastapi import FastAPI

app = FastAPI()

@app.get("/greet")
async def greet():
    return {"message": "Hello World"}

/home/myApp2/app2.py

from fastapi import FastAPI

app = FastAPI()

@app.get("/response")
async def my_api_name():
    return {"message": "This is second app response"}

As shown above, I don't want to serve the apps on specific TCP ports; instead I will be using Unix domain socket (.sock) files. This keeps my ports free, lets nginx and each app talk over a local socket instead of the network stack, and generally improves response time.
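If you just want to see the socket binding in isolation, before bringing in Gunicorn and systemd below, Uvicorn itself can bind to a Unix socket through its uds option. This is only a quick local sketch, not the production setup described in the rest of the post:

# run_app1_local.py -- local check that app1 serves over a Unix socket
import uvicorn

if __name__ == "__main__":
    # equivalent to: uvicorn app1:app --uds /home/myApp1/app1.sock
    uvicorn.run("app1:app", uds="/home/myApp1/app1.sock")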

NGINX

My nginx is used as a reverse proxy and only serves these 2 apps; your config can be much longer than mine, so just take what you need from it.

http {
  server {
    listen 80;
    client_max_body_size 4G;

    server_name example.com;

    # This is the url path before nginx for app1,
    # and it will be mapped to the socket served by the upstream
    location /greet {
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_redirect off;
      proxy_buffering off;
      proxy_pass http://app1;
    }

    # This is the url path before nginx for app2,
    # and it will be mapped to the socket served by the upstream
    location /response {
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_redirect off;
      proxy_buffering off;
      proxy_pass http://app2;
    }

    location /static {
      # path for static files
      root /path/to/app/static;
    }
  }

  # this upstream points nginx at the socket bound by app1's gunicorn process (used by /greet)
  upstream app1 {
    server unix:/home/myApp1/app1.sock;
  }

  # this upstream points nginx at the socket bound by app2's gunicorn process (used by /response)
  upstream app2 {
    server unix:/home/myApp2/app2.sock;
  }

}

And finally, we want to keep both apps running at all times; even if the OS restarts, we want them to come back up as well. For this I chose to use systemd .service files. Here are mine:

/etc/systemd/system/app1.service

# this unit file will create a process that starts at system start-up 
# to keep the app running even if the os restarts

[Unit]
Description=My gunicorn manager for first app
After=network.target

[Service]

# you really should NOT run your processes as root; use a dedicated user created especially for this task
User=root

# working dir should be a path pointing to the folder containing your main.py
WorkingDirectory=/home/myApp1/

# path to your python env
Environment="PATH=/home/user1/venv/bin/"

# using the process manager (gunicorn) we start uvicorn and bind the socket that nginx is going to use
ExecStart=/home/user1/venv/bin/gunicorn --workers=1 -k uvicorn.workers.UvicornWorker --bind unix:app1.sock -m 000 app1:app

[Install]
WantedBy=multi-user.target

/etc/systemd/system/app2.service

# this unit file will create a process that starts at system start-up 
# to keep the app running even if the os restarts

[Unit]
Description=My gunicorn manager for second app
After=network.target

[Service]

# you really should NOT run your processes as root; use a dedicated user created especially for this task
User=root

# working dir should be a path pointing to the folder containing your main.py
WorkingDirectory=/home/myApp2/

# path to your python env
Environment="PATH=/home/user1/venv/bin/"

# using the process manager (gunicorn) we start uvicorn and bind the socket that nginx is going to use
ExecStart=/home/user1/venv/bin/gunicorn --workers=1 -k uvicorn.workers.UvicornWorker --bind unix:app2.sock -m 000 app2:app

[Install]
WantedBy=multi-user.target
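After placing the unit files, reload systemd and enable both services (systemctl daemon-reload, then systemctl enable --now app1 app2). If you want to sanity-check the sockets before putting nginx in front of them, the sketch below talks to each socket directly from Python; it assumes the httpx package is installed, which is not part of the setup above.

# check_sockets.py -- sanity check: call each app through its Unix socket
import httpx

# the host part of the URL is ignored when a Unix-socket transport is used;
# the path must match the route defined in each app
app1 = httpx.Client(transport=httpx.HTTPTransport(uds="/home/myApp1/app1.sock"))
app2 = httpx.Client(transport=httpx.HTTPTransport(uds="/home/myApp2/app2.sock"))

print(app1.get("http://localhost/greet").json())     # {"message": "Hello World"}
print(app2.get("http://localhost/response").json())  # {"message": "This is second app response"}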

The complete code snippets can be found at: https://github.com/remenyic/FastAPI_infrastructure

Feel free to adapt them to your use case.

Comments: 0

Butoane has been updated to 2.1

It may not seem like a lot, but we now support code snippets in posts!
How awesome is that? From now on we no longer depend on Gist to show code snippets.
Our posts can now also contain special characters, fonts, smileys and much more, making them feel much more alive!
Another thing added in this update is the option to leave a comment! So there you go: if by any chance you have questions about our posts, you can now voice them.
To leave a comment, first select the post by clicking on its title; you will be redirected to the post details, where the "leave comment" button should appear.

So what's next?

We are thinking of building a web platform for online shops, where anyone could open one. The project would be designed for artisans and people who hand-craft items and want to sell them.

Or!

Create a web-based game; not a lot of thinking has gone into this idea yet, but it remains open.
Here is an example of a code snippet in action.

import random

print(random.choice(['webshop', 'browser-game']))


Comments: 0

Spider to crawl sites and grab URLs -- made with Python

"""
REQUIREMENTS:
Python 3+
requests-html: https://pypi.org/project/requests-html/
pip install requests-html
"""
from urllib.parse import urlparse
from requests_html import HTMLSession

'''
The Spider class takes one argument: the URL of the website to crawl.
'''
class Spider():
    # initializes the spider with its own session and link lists
    # (instance attributes, so separate Spider objects don't share mutable state)
    def __init__(self, site_url):
        self.session = HTMLSession()
        self.website = site_url
        self.to_crawl = []
        self.crawled = []

    # self-calling method: crawls a URL, collects its links, keeps only those
    # on the same domain as the main website, and adds them to to_crawl;
    # it then calls itself to drain to_crawl, moving each link into crawled
    # once it has been visited
    def get_links(self, **kwargs):
        if 'link' in kwargs:
            link = kwargs['link']
            print('Crawling: ', link)
        else:
            link = self.website

        # fetch the current page and collect every absolute link on it
        web_session = self.session.get(link)
        links = web_session.html.absolute_links

        if link in self.to_crawl:
            idx = self.to_crawl.index(link)
            self.crawled.append(self.to_crawl.pop(idx))
        for l in links:
            # skip links already visited or already queued
            # (this also avoids writing duplicates to links.txt)
            if l in self.crawled or l in self.to_crawl:
                continue
            parsed_link = urlparse(l)
            if parsed_link.netloc == urlparse(self.website).netloc:
                self.to_crawl.append(l)
                # write each new link to a .txt file so it can be used later
                with open("links.txt", "a") as savefile:
                    savefile.write(l + '\n')
        print('Links to crawl: ', len(self.to_crawl))
        print('Crawled links: ', len(self.crawled))
        print('----------------------')
        self.to_crawl = list(set(self.to_crawl))
        # as long as 'to_crawl' is not empty, keep crawling the first link in it
        # (note: the recursion can hit Python's recursion limit on very large sites)
        if len(self.to_crawl) > 0:
            self.get_links(link=self.to_crawl[0])


r = Spider('https://www.example.com/')
r.get_links()

This is a Python class that is initialized with a valid URL and then crawls it to collect the other URLs found on the website. The URLs are saved to links.txt, which the spider creates in the folder it is run from. A collected URL is saved only if it has the same domain name as the main website; this keeps the spider contained to the given site and prevents it from wandering off to other sites, e.g. facebook, twitter, etc. To start the spider, you must give it a link that contains both the transfer protocol and the domain name. Code in the GitHub repo: https://github.com/remenyic/butoane_public/blob/master/url_spider.py


Comments: 0

Butoane is version 2!

Even though the website has been live since last summer, it was more of a side project and a proof to ourselves that it could be done. Now that COVID-19 has locked everyone indoors, we decided to make the best of it and take this website more seriously. With that in mind, we built the search engine part of the site, which implements some NLP algorithms together with a newly added skill: Elasticsearch. What we have in mind, starting yesterday, is to get involved in open-source projects, with Python as the main tool, to develop apps that everyone can use and to learn more about Python along the way. More blog posts will come soon.

Comments: 0