Hacker News

Nice!

What I missed from the writeup were some specific cases, and an explanation of how you tested that all this orchestration delivers worthwhile data (actionable, complete, and correct).

E.g., you have a screenshot of the AI supply chain - more of these would be useful, along with some info about how you verified that this supply chain agrees with reality.

Unless the goal of the project was to just play with agent architecture - then congrats :)




Great advice!

For demo purposes and to attract attention, I primarily picked cases with cool visuals (like the screenshot of the AI supply chain you mentioned). We have some internal evals and will try to add more cases to the public repo for reference.


More signs of the AI bubble. Completely unprofessional behavior ("cool visuals" not "real results"). And don't give me that "hacker culture" bullshit, these people are targeting Wall Street as paying customers.

Would it be more professional, in your opinion, if I claimed to make $xxxxx via this tool? I thought I had clearly stated that the cool visuals are for demo purposes and to attract attention. I do not want to post any dramatic statements to trick people into using it. This is an early-stage open source project to help investors and traders organize their thoughts, not an automatic money-making machine that guarantees profit. It's the mind using the tool that decides whether they profit from the market.

>And don't give me that "hacker culture" bullshit

I couldn’t help but be genuinely curious: if you believe AI is a bubble and aren’t a fan of hacker culture, then why are you here on Hacker News?

Great to hear your input anyway!


First of all this project is great and finance is ready for a disruption like this. I'm sure a lot of good research and development went into this.

Quality research indeed doesn't always make money, so I agree that it doesn't make sense to present these types of metrics. But at the same time, it will be hard to trust this sort of thing immediately without a way to validate its output. At the very least I would like to know that the financial metrics it calculates (especially those based on 20/30 data points) are correct. It looks like there is some transparency built in, and that's a good thing.

But people who are not pros in investment research wouldn't know that it messed up a certain metric, and therefore that the output differs from what it tells them. Or maybe it's not messing up entirely, but a certain sector-specific detail doesn't get picked up, making a signal weaker than the output led you to believe. Maybe you already have it, but if not, you could add some sort of validation layer, which could also serve as a customisable calculation engine. I'd use it right away.
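Concretely, what I have in mind is something like this (a minimal Python sketch; the function names and the tolerance are mine, not from the project): recompute each metric independently from the raw inputs and flag any disagreement with the agent's reported value.

```python
import math

def pe_ratio(price: float, eps: float) -> float:
    """Independently recompute trailing P/E from raw price and EPS."""
    if eps == 0:
        raise ValueError("EPS is zero; P/E undefined")
    return price / eps

def validate_metric(reported: float, recomputed: float,
                    rel_tol: float = 0.01) -> bool:
    """True if the agent's reported value agrees with the independent
    recomputation within a relative tolerance (here 1%)."""
    return math.isclose(reported, recomputed, rel_tol=rel_tol)

# Agent reports P/E of 25.0; raw data says price=100.0, eps=4.0 -> OK
assert validate_metric(25.0, pe_ratio(100.0, 4.0))
# Agent reports 30.0 against the same raw data -> flagged
assert not validate_metric(30.0, pe_ratio(100.0, 4.0))
```

Even a simple check like this per metric would go a long way, since the same recomputation hooks could then be exposed as the customisable calculation engine.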


Thanks, very valid point. We are building towards a benchmark as well. Hope we can share more quantitative metrics soon.


