Announcement - Preparing for v3.0.0 (Breaking changes) #1247
Comments
I'm somewhat afraid that some wrong design decisions were made, or some design points were skipped, when the dataset was transformed into NoSQL.
Assumption
If enough time and resources are available, these would need data modeling (e.g. with UML) to validate. Do these make sense? I'm currently using the current API to pull the full ~5,000-line dataset into a Google Sheet with importjson about once a day.
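For reference, a minimal sketch of that daily pull done programmatically instead of through importjson. The endpoint path and the response wrapper are assumptions and may need adjusting to whatever the current API actually exposes:

```ts
// Minimal sketch of fetching the full dataset once a day.
// The endpoint and the { result } wrapper are assumptions, not confirmed.
const API_URL = "https://etherscamdb.info/api/scams/"; // assumed v2-style endpoint

interface ScamEntry {
  id: number;
  name: string;        // e.g. the reported domain
  url: string;
  category?: string;
  // ...other fields omitted; the exact shape is what this issue is about
}

async function fetchScams(): Promise<ScamEntry[]> {
  const res = await fetch(API_URL);
  if (!res.ok) throw new Error(`API request failed: ${res.status}`);
  const body = await res.json();
  // Assumes the payload is wrapped as { success, result: [...] }.
  return body.result as ScamEntry[];
}

fetchScams().then((scams) => console.log(`Fetched ${scams.length} entries`));
```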
Here's a data entry attribute I've been waiting for: somethingbad.tumblr.com ==> subdomain bad, root domain good. I need an additional attribute to distinguish those two cases. Also, how can I identify the scope of the blacklisting below?
First of all, your opinion on this matter is highly appreciated. We will not be pushing these changes to production until we are sure they work out for everyone involved in the project. To address your concerns about URI/URL classification, I've been thinking about this as well. I propose we work towards the following scam entry structure:
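A minimal sketch of what such an entry could look like, assuming separate fields for the full URI, the hostname, and the registrable root domain, plus a flag for whether that root is itself malicious. All field names below are illustrative assumptions, not a confirmed schema:

```ts
// Hypothetical sketch only — every field name here is an assumption.
interface ProposedScamEntry {
  id: number;
  url: string;                  // full reported URI, e.g. "http://somethingbad.tumblr.com/claim"
  hostname: string;             // "somethingbad.tumblr.com"
  rootDomain: string;           // "tumblr.com"
  rootDomainMalicious: boolean; // false for shared hosts like tumblr.com / blogger.com
  category: string;             // "Phishing", "Scamming", ...
  subcategory?: string;
  reporter?: string;
  addresses?: string[];         // associated Ethereum addresses
  status?: "Active" | "Offline" | "Suspended";
}
```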
These changes will also be reflected in the UI. While I understand compatibility is important, I think some entries still showing the … We will indeed also be providing more integration data (like VirusTotal) through the API in the future. Please let me know what you think 😄
Good!

a) Scam/phishing evidence. As evidence I want some snapshot record, like URLScan or PhishCheck; showing only ttt.ppp does not satisfy the need. We could either add an additional link to the evidence, or specify it in the URI as in the latter example. For now I keep them locally or search for them as above.

b) I can think of at least three types of templates:
1. sss.ttt.ppp — these rotate URIs/contents and try to escape the filter with staged deployments (initial: …, later: …). In that case, when registering sss.ttt.ppp, we must mark whether ttt.ppp itself is good or bad, to distinguish those from Tumblr/Blogger, for example (see the sketch after this list).
2. Google Docs / Telegram / Dropbox, etc.

Those are probably fundamental requirements on the dataset from this perspective, and they should desirably be met at every stage of expansion; but sooner is better, to avoid a big modification or rewrite later. As for the UI, ESDB is intentionally a professional tool, and I recommend not sticking to the simplicity and entertainment factor that some outsiders may expect.
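Pulling those two list items together, here is a hypothetical sketch of example entries. The field names follow the assumed structure sketched earlier (plus an assumed `evidence` field for snapshot links), not the project's confirmed schema:

```ts
// Hypothetical example entries — field names are assumptions, not the
// project's confirmed schema. They illustrate the two list items above:
// rotating subdomains on a benign root, and scams hosted on shared services.
const examples = [
  {
    url: "http://sss.ttt.ppp/airdrop",               // placeholder host from the comment above
    hostname: "sss.ttt.ppp",
    rootDomain: "ttt.ppp",
    rootDomainMalicious: false,                       // only the subdomain is bad
    evidence: ["https://urlscan.io/result/<uuid>/"],  // snapshot link, assumed field
    category: "Phishing",
  },
  {
    url: "https://docs.google.com/forms/d/<id>/viewform",
    hostname: "docs.google.com",
    rootDomain: "google.com",
    rootDomainMalicious: false,                       // never blacklist the whole root here
    evidence: ["https://urlscan.io/result/<uuid>/"],
    category: "Scamming",
  },
];
```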
I will update my PR next week using your feedback, thanks for being involved 😄
https://medium.com/@etherscamdb/breaking-api-changes-in-v3-646217a22bac