# Feedcake
## "Give me a piece of cake and I want the whole cake."
### Attention
This script is maintained by only one person, who is also a Python newbie.
If you don't care about having article images, you should definitely use
[PyFeeds](https://github.com/PyFeeds/PyFeeds) instead!
Also, it only works for a very limited subset of news sites.
### The Problem
Most news platforms don't give you the full article via RSS/Atom.
That alone wouldn't be a big problem, but some of them do crazy 1984-ish stuff on their
websites, or they have put up paywalls for visitors who use privacy add-ons.
### Goal of this script
Getting a full-featured news feed (full articles with images) from various
news sites.
### Benefits for the user
* Read full articles directly in your feed reader
* Exclude articles by keyword in the title
* No tracking
* No ads
### Possible downsides for the user
* Articles don't get updated once they are scraped
* Articles arrive with some delay
* Interactive/special elements in articles may not work
### What it does
* Fetching the news feed from the original website
* Scraping the contents of new entries and saving them into a directory structure (see the sketch below)
* Excluding articles if a string from the `exclude` list appears in their title
* Saving a full-featured RSS file
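
To make this loop concrete, here is a minimal sketch of how such a fetch/filter/scrape cycle can look. The library choices (`feedparser`, `requests`, `BeautifulSoup`), the feed URL, the selectors, and the paths are illustrative assumptions, not the actual implementation of this script:

```python
# Minimal sketch of the fetch -> filter -> scrape -> save loop described above.
# All names, URLs and selectors are assumptions for illustration only.
import json
import os

import feedparser               # pip install feedparser
import requests                 # pip install requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4

with open("config.json") as f:
    config = json.load(f)

feed = feedparser.parse("https://example.org/news/feed.xml")  # placeholder feed URL

for entry in feed.entries:
    # Exclude articles whose title contains one of the configured strings.
    if any(word.lower() in entry.title.lower() for word in config.get("exclude", [])):
        continue

    # Fetch the article page and extract its main content.
    response = requests.get(entry.link, timeout=30)
    soup = BeautifulSoup(response.text, "html.parser")
    article = soup.find("article") or soup.body  # real sites need site-specific selectors

    # Save the scraped content into the directory structure.
    os.makedirs("public/assets", exist_ok=True)
    target = os.path.join("public/assets", f"{abs(hash(entry.link))}.html")
    with open(target, "w", encoding="utf-8") as out:
        out.write(str(article))

# Building the final full-featured RSS file from these snapshots is omitted here.
```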
### ... and what it doesn't
* Managing when it scrapes (but setup instructions for a crontab are included)
* Serving the feeds and assets via HTTPS (use your favorite web server for that)
* Dealing with article comments
* Archiving feeds (content and assets are kept, but without metadata)
* Using some sort of database (the file structure is everything)
* Cleaning up old assets
* Automatically updating the `basedir` if it has changed
  (you have to clear the assets directory yourself)
### Ugly stuff?
* The HTML files (feed content) get stored alongside the assets, even though they don't
  need to be exposed via HTTPS.
* Almost no exception handling yet.
### How to use
* `git clone` this project and enter the directory
* Install python3, pip and virtualenv
* Create a virtualenv: `virtualenv -p python3 ~/.virtualenvs/feedcake`
* Activate your new virtualenv: `source ~/.virtualenvs/feedcake/bin/activate`
* Switch into the project's directory: `cd feedcake`
* Install the requirements: `pip3 install -r requirements.txt`
* Copy the config example: `cp config-example.json config.json`
* Edit `config.json` (see the example at the end of this section)
* Copy the cron example: `cp cron-example.sh cron.sh`
* Edit `cron.sh`
* Make `cron.sh` executable: `chmod +x cron.sh`
* Add a cronjob for `cron.sh`: `crontab -e`
  * `*/5 * * * * /absolute/path/to/cron.sh >> /path/to/logfile 2>&1`
* Set up your web server:
  * Let your web server point to the `public/feeds` directory.
    You should protect this HTTP path with basic authentication.
  * Let the `assets_url` you specified in the config earlier point to the
    `public/assets` directory.
* After running the script for the first time, your desired feed is available at
  `base_url/destination` (e.g. `https://yourdomain.tld/some-url/newspaper.xml`)
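
For orientation, a `config.json` could look roughly like the following. The authoritative schema is `config-example.json` in the repository; apart from `base_url`, `assets_url`, `destination`, and `exclude`, which are mentioned above, the structure shown here (especially the `feeds` list and its keys) is an assumption:

```json
{
  "base_url": "https://yourdomain.tld/some-url",
  "assets_url": "https://yourdomain.tld/some-url/assets",
  "exclude": ["Sponsored", "Advertorial"],
  "feeds": [
    {
      "source": "https://example.org/news/feed.xml",
      "destination": "newspaper.xml"
    }
  ]
}
```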
### TODOs
* Handle exceptions
* Decide what should happen with old news articles and assets that are no longer
  listed in the current feed