# Feedcake
## "Give me a piece of cake and I'll want the whole cake."
### Attention
This script is maintained by a single person, who is also a Python newbie.
If you don't care about having article images, you should definitely use
[PyFeeds](https://github.com/PyFeeds/PyFeeds) instead!
Also, it only works for a very limited subset of news sites.
### The Problem
Most news platforms don't give you the full article via RSS/Atom.
That alone wouldn't be a big problem, but some of them do crazy 1984-ish stuff on their
websites, or they have built up paywalls for visitors using privacy add-ons.
### Goal of this script
Getting a full-featured news feed (full articles with images) from various
news pages.
### Benefits for the user
* read full articles directly in your feed reader
* exclude articles by keyword in the title
* no tracking
* no ads
### Possible downsides for the user
* articles don't get updated once they are scraped
* articles arrive with some delay
* interactive/special elements in articles may not work
### What it does
* Fetch the news feed from the original website
* Scrape the contents of new entries and save them into a directory structure
* Exclude articles if a string from the 'exclude' list appears in the title
* Save a full-featured RSS file
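The 'exclude' list, `assets_url`, `base_url`, and `destination` mentioned in this README all live in `config.json`. As a rough sketch only (the field names are inferred from this README, and the actual structure may differ; `config-example.json` in the repository is authoritative), a config could look something like this:

```json
{
  "base_url": "https://yourdomain.tld/some-url",
  "assets_url": "https://yourdomain.tld/assets",
  "feeds": [
    {
      "source": "https://example.com/rss",
      "destination": "newspaper.xml",
      "exclude": ["liveticker", "podcast"]
    }
  ]
}
```

Here, any article whose title contains "liveticker" or "podcast" would be skipped during scraping.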
### ... and what it doesn't
* Managing when it scrapes (but install instructions for crontab are included)
* Serving the feeds and assets via HTTPS (use your favorite web server for that)
* Dealing with article comments
* Archiving feeds (content and assets are kept, but without metadata)
* Using some sort of database (the file structure is everything)
* Cleaning up old assets
* Automatically updating the basedir if it changed
### Ugly stuff?
* the HTML files (feed content) are stored alongside the assets, even if they don't
need to be exposed via HTTPS.
* almost no exception handling yet.
### How to use
* git clone this project and enter the directory
* install python3, pip and virtualenv
* create a virtualenv: `virtualenv -p python3 ~/.virtualenvs/feedcake`
* activate your new virtualenv: `source ~/.virtualenvs/feedcake/bin/activate`
* switch into the project's directory: `cd feedcake`
* install the requirements: `pip3 install -r requirements.txt`
* copy the config example: `cp config-example.json config.json`
* edit `config.json`
* copy the cron example: `cp cron-example.sh cron.sh`
* edit `cron.sh`
* make `cron.sh` executable: `chmod +x cron.sh`
* add a cronjob for `cron.sh`: `crontab -e`
* `*/5 * * * * /absolute/path/to/cron.sh >> /path/to/logfile 2>&1`
* setup your webserver:
  * let your webserver somehow point to the `public/feeds` directory.
    You should protect the HTTP path with basic authentication.
  * let the `assets_url` you specified in the config earlier point to the
    `public/assets` directory.
* after running the script for the first time, your desired feed is available at
  `base_url/destination` (e.g. `https://yourdomain.tld/some-url/newspaper.xml`)
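The webserver part could look roughly like this in nginx. This is only a sketch: the domain, filesystem paths, and htpasswd location are placeholder assumptions, and any web server with aliasing and basic auth works just as well.

```nginx
server {
    listen 443 ssl;
    server_name yourdomain.tld;  # placeholder domain

    # feeds: protect the path with basic authentication
    location /some-url/ {
        alias /path/to/feedcake/public/feeds/;  # adjust to your clone location
        auth_basic "Feeds";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    # assets: referenced from inside the articles, so served without auth
    location /assets/ {
        alias /path/to/feedcake/public/assets/;
    }
}
```

With this layout, `base_url` would be `https://yourdomain.tld/some-url` and `assets_url` would be `https://yourdomain.tld/assets`.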
### TODOs
* Handle exceptions
* Decide what should happen with old news articles and assets which are not
listed in the current feed anymore