
polishing

master
Andreas Demmelbauer, 5 years ago
parent revision 4025c6f580
1 changed file with 5 additions and 2 deletions
README.md

@@ -23,20 +23,22 @@ news pages
### What it does
* Fetching the news feed from the original website
* Scraping the contents of new entries and saving them into a directory structure
* Excluding articles whose title contains any string in the 'exclude' list
* Saving a full-featured RSS file
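
The title-based exclusion above can be sketched roughly as follows (a minimal illustration only; the entry dicts and the `exclude` list shown here are assumed stand-ins, not the project's actual code):

```python
# Sketch of the title-based exclusion described above.
# The entry format and exclude list are hypothetical examples.
def is_excluded(title, exclude):
    """Return True if any exclude string occurs in the title."""
    return any(s in title for s in exclude)

entries = [
    {"title": "Local news roundup"},
    {"title": "Sponsored: buy our product"},
]
exclude = ["Sponsored"]

# Keep only entries whose titles match no exclude string
kept = [e for e in entries if not is_excluded(e["title"], exclude)]
# kept now holds only the first entry
```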

### ... and what it doesn't
- * Managing when it scrapes (use crontab or sth else for that)
+ * Managing when it scrapes (but install instructions for crontab are included)
* Serving the feeds and assets via HTTPS (use your favorite web server for that)
* Dealing with article comments
* Archiving feeds (content and assets are kept, but without metadata)
* Using some sort of database (the file structure is everything)
* Cleaning up old assets
- * Automaticly updating the basedir if it changed.
+ * Automatically updating the basedir if it changed.
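
Since scheduling is left to external tools, a crontab entry is the usual approach. A hedged example (the interval, paths, and script name are placeholders, not taken from this repository):

```shell
# m h dom mon dow  command
# Run the scraper every 15 minutes; adjust the path and script name
# to your installation.
*/15 * * * * cd /path/to/feedscraper && python3 scraper.py >> /var/log/feedscraper.log 2>&1
```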

### Ugly stuff?
* The HTML files (feed content) are stored alongside the assets, even if they
don't need to be exposed via HTTPS.
* Almost no exception handling yet.

### How to use
* git clone this project and enter the directory
@@ -60,5 +62,6 @@ news pages
`base_url/destination` (e.g. `https://yourdomain.tld/some-url/newspaper.xml`)
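
Serving the output over HTTPS is likewise delegated to a web server. As one illustration (the server name, paths, and certificate locations below are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name yourdomain.tld;                           # placeholder domain
    ssl_certificate     /etc/ssl/certs/yourdomain.pem;    # placeholder cert
    ssl_certificate_key /etc/ssl/private/yourdomain.key;  # placeholder key

    location /some-url/ {
        # Directory the scraper writes newspaper.xml and assets into
        alias /path/to/feedscraper/output/;
    }
}
```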

### TODOs
* Handle exceptions
* Decide what should happen with old news articles and assets that are no
longer listed in the current feed.
