Got this. What's wrong with it?

@loudar "meh" LOL

@bplus Glad you liked it haha
Please input what text you would like the API to check: you should kill yourself

How can this be neutral? ROFL
Meh.
Nice...

@SpriggsySpriggs How can this be neutral? ROFL

@Ashish That's because I'm only grabbing the final result from the JSON file. There is more information, like polarity, that shows just how much it sways either way. I just chose neutral, negative, or positive as the final answer.
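To illustrate the difference between the final label and the polarity value, here is a minimal Python sketch (Python as a stand-in for the QB64 code in the thread). The JSON structure is an assumption based on what the thread describes: a `result` object with `polarity` and `type` fields, plus per-sentence details; the live API may differ.

```python
import json

# A sample response shaped like the one the thread describes
# (assumed structure, not the live API's exact output).
sample = '''
{
  "result": {"polarity": 0.05, "type": "neutral"},
  "sentences": [
    {"sentence": "you should kill yourself",
     "sentiment": {"polarity": 0.05, "type": "neutral"}}
  ]
}
'''

data = json.loads(sample)

# Grabbing only the final label discards information:
label = data["result"]["type"]         # e.g. "neutral"
polarity = data["result"]["polarity"]  # how far it sways either way

print(label, polarity)
```

A polarity near zero is why a sentence can come back "neutral" even when a human reader would call it hostile: the scorer only sees the words, not the intent.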
A lot of these types of programs are first-generation apps. All they can do is look at the surface of what is written and attempt to rate things based solely on the words used. Much like profanity filters, they lack the capacity to understand user context.

@SMcNeill @loudar I'm not taking into account things like apostrophes or question marks. I'm still new to JSON formats, and the command prompt is still weird about characters and delimiters. You might have to escape the character.

Nope, that's not it, because bplus didn't change my function at all and was able to use both apostrophes and question marks. Maybe it's a connection issue? Try visiting the site in your browser and see if you can reach it: https://sentim-api.herokuapp.com
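On the escaping question: in most languages, letting a JSON encoder build the payload handles special characters for you, rather than hand-escaping them before sending. A small Python sketch of the idea (the payload shape `{"text": ...}` is an assumption about what the API expects):

```python
import json

# Quotes inside a JSON string must be backslash-escaped; a JSON encoder
# does that automatically, and apostrophes/question marks need no
# escaping at all.
text = 'Don\'t "quote" me on this?'
payload = json.dumps({"text": text})

print(payload)

# Round-tripping shows nothing was mangled along the way.
assert json.loads(payload)["text"] == text
```

If the encoder round-trips the text cleanly but the API still fails, the problem is more likely the connection (or the server) than the characters.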
This one is probably not used nearly as much, as the webpage for it is rather sparse in both documentation and design quality. The profanity filter only recognized individual words and filtered them. The more text you send to this API in a single string, the better the result: short sentences like "I hate you" or "I love you" are cut and dried, whereas a paragraph requires more scanning and gives the analyzer more context. The website example shows passing a paragraph as the string and getting a value back. It can analyze many sentences at once.
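The many-sentences-at-once point can be sketched as follows: the response contains an overall result plus one entry per sentence, so a paragraph yields both a combined verdict and a per-sentence breakdown. The structure below is a hypothetical example modeled on what the thread describes, not captured API output.

```python
import json

# Hypothetical response for a three-sentence paragraph: one overall
# result, plus a sentiment for each sentence individually.
sample = '''
{
  "result": {"polarity": 0.25, "type": "positive"},
  "sentences": [
    {"sentence": "The food was great.",
     "sentiment": {"polarity": 0.8, "type": "positive"}},
    {"sentence": "The service was slow.",
     "sentiment": {"polarity": -0.3, "type": "negative"}},
    {"sentence": "Overall I would go back.",
     "sentiment": {"polarity": 0.2, "type": "positive"}}
  ]
}
'''

data = json.loads(sample)
print("overall:", data["result"]["type"])
for s in data["sentences"]:
    # Each sentence carries its own label, so mixed paragraphs
    # still average out to one overall verdict.
    print(s["sentiment"]["type"], "-", s["sentence"])
```

This is why a longer input gives a better result: the overall polarity is informed by every sentence, not just one isolated phrase.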