CrewAI Flows Crash Course


aiwithbrandon

7 months ago

27,259 views

Comments:

@mikesara7032 - 25.04.2024 22:56

You're awesome, thank you!
@GregPeters1 - 25.04.2024 23:32

Hey Brandon, welcome back after your vacay!

@thefutureisbright - 25.04.2024 23:43

Brandon, excellent tutorial 👍
@protovici1476 - 26.04.2024 04:10

Excellent video! It would be interesting to see these frameworks within LightningAI Studios. Also, I saw CrewAI will be taking a more gold-standard approach to their code structuring in the near future.
@pratyushsrivastava6646 - 26.04.2024 04:45

Hello sir
Nice content
@shuntera - 26.04.2024 06:06

With both the Groq 8b and 70b with crew max_rpm set at both 1 or 2 I do get it halting for a while with:

[INFO]: Max RPM reached, waiting for next minute to start.
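The `max_rpm` throttle described above is, conceptually, a per-minute request counter: once the crew has issued `max_rpm` LLM calls within a rolling minute, it pauses and logs that message. A minimal pure-Python sketch of that idea (this is an illustration, not CrewAI's actual implementation; the `RpmLimiter` name and `clock` parameter are invented here for testability):

```python
import time

class RpmLimiter:
    """Allow at most max_rpm calls per rolling 60-second window."""

    def __init__(self, max_rpm, clock=time.monotonic):
        self.max_rpm = max_rpm
        self.clock = clock      # injectable clock, so the sketch is testable
        self.calls = []         # timestamps of calls inside the current window

    def try_acquire(self):
        now = self.clock()
        # Keep only timestamps from the last 60 seconds.
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) >= self.max_rpm:
            # This is the situation behind the log line quoted above.
            print("[INFO]: Max RPM reached, waiting for next minute to start.")
            return False
        self.calls.append(now)
        return True
```

With `max_rpm=1`, even a single agent making two tool calls in the same minute hits the wait, which is why the message appears so often against Groq's rate-limited free tier.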

@nathankasa6220 - 26.04.2024 06:18

Thanks! Is Claude 3 Opus still not supported, though? How come?
@MariodeFelipe - 26.04.2024 07:06

The quality is 10/10, thanks mate

@deadbody408 - 26.04.2024 07:31

Might want to revoke those keys you revealed if you haven't

@rauljauregi6615 - 26.04.2024 07:53

Nice! 😃
@CodeSnap01 - 26.04.2024 07:57

Refreshed after a short vacation.. hope to see you frequently
@magnuscarlsson5067 - 26.04.2024 16:11

What graphics card do you use on your computer when running locally with Ollama?

@Ryan.Youtube - 26.04.2024 23:57

This is awesome! 😎

@clinton2312 - 27.04.2024 11:34

Thank you :)
@jarad4621 - 27.04.2024 20:44

Hi Brandon, the Groq rate limit is a big issue for my use case. Can I use this same method with another similarly hosted Llama 3 70B with CrewAI, like the OpenRouter API, or can any API be used instead of Groq with your method?
@reidelliot1972 - 27.04.2024 21:51

Great content as always! Do you know if it's sustainable to use a single GroqCloud API key to host LLM access for a multi-user app? Or would a service like AWS SageMaker be better for simultaneous users?

Cheers!
@shuntera - 28.04.2024 00:29

That is using a very old version of CrewAI - if you run it with the current version it fails because the Tasks lack the expected_output parameter
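In current CrewAI releases, `expected_output` is a required field on `Task`, which is exactly the pydantic validation error quoted further down this thread. The toy class below is a stand-in (using a plain dataclass, not crewai's real pydantic model; the description and output strings are placeholders) just to show why old tutorial code breaks and what the fix looks like:

```python
from dataclasses import dataclass

# Toy stand-in for crewai.Task: once expected_output is a required
# field, constructing a Task without it fails immediately.
@dataclass
class Task:
    description: str
    expected_output: str  # required in current CrewAI; old examples omit it

try:
    Task(description="Analyze the business landscape")  # old-style call
except TypeError as err:
    print("old-style Task fails:", err)

# The fix: always provide expected_output describing what the task returns.
task = Task(
    description="Analyze the business landscape",
    expected_output="A bullet-point summary of key competitors",
)
```

With the real library the fix is the same shape: add an `expected_output=` string to every `Task(...)` call in the older example code.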

@theBookofIsaiah33ad - 28.04.2024 04:58

Man, I do not know how to create and write code but you have made a video and I think I can do this! Bless you my friend!

@miaohf - 28.04.2024 08:24

Very good video demonstration. I noticed that you chose to use serper search in the video. I would like to know the difference between serper and duckduckgo search and how to choose between them. If you know, please introduce it to me. Thank you.
@RobBominaar - 28.04.2024 23:24

My God, what stupid examples can you produce...
@bennie_pie - 29.04.2024 10:31

Thank you for this and for the code. How does Llama 3 compare to Dolphin-Mistral 2.8 running locally for the more junior agents, do you know? Dolphin-Mistral, with its extra conversation/coding training and bigger 32k context window, appeals! I've had agents go round in circles creating nonsense with other frameworks as they don't remember what they are supposed to do! A big context window definitely could bring some benefits! I try to avoid using GPT-3.5 or 4 for coding for this reason. I'd then like to use Claude 3 Opus with its 200k context window and extra capability for the heavy lifting and oversight!
@d.d.z. - 29.04.2024 19:10

Friendly comment: You look better with glasses, more professional. Great content.
@jalapenos12 - 30.04.2024 02:50

Just curious why VSCode doesn't display file types on Mac. I'm going bonkers trying to figure out what to save the Modelfile as.
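For reference, an Ollama Modelfile is a plain-text file conventionally saved with the literal name `Modelfile` and no extension at all, which is why there is no file type to pick in VS Code. A minimal example (the base model and parameter values here are illustrative):

```
FROM llama3
PARAMETER temperature 0.7
SYSTEM You are a helpful research assistant.
```

It is then built into a named local model with `ollama create my-llama3 -f Modelfile`.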

@ag36015 - 30.04.2024 16:12

What would you say are the minimum hardware requirements to make it run smoothly?
@jarad4621 - 01.05.2024 01:02

Please, for the love of god, somebody explain to me why we are using Ollama to download local models and then using Groq anyway to run the model in the cloud. Why can't we just skip the Ollama part? I beg you, I see all the videos using Ollama with Groq and I don't understand this aspect! Thank you. Does Ollama do something special to make it work better for CrewAI than a direct Groq connection?

@Omobilo - 01.05.2024 04:04

Great stuff. Maybe a silly question, but when it was fetching data from a remote website (the analysis part), does it read it remotely, OR does it capture screenshots & download text to feed into its prompt? And does it then clear this cached data, or does such local cached data need to be cleaned eventually? Hope it simply reads remotely without too much data saved locally, as I plan to use this approach to analyze many websites without flooding my local storage.
@tapos999 - 01.05.2024 21:57

Thanks! Your CrewAI tutorials are top-of-the-shelf stuff. Do you have any CrewAI project with Streamlit connected to show output on the UI? Thanks
@kepenge - 03.05.2024 16:58

Appreciate your support (with these contents); the only drawback was the need to subscribe to get access to a project that isn't yours. 😞
@Storytelling-by-ash - 05.05.2024 21:12

I get an error; then I noticed that we need the search API. I added that but still get the error:
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
Field required [type=missing, input_value={'description': "Analyze ...e business landscapes.)}, input_type=dict]
@ZombieGamerPlays - 07.05.2024 19:20

Guys, do you know any way to run crewai and/or llama on GPU? CPU-only is soooo slow
@ryana2952 - 12.05.2024 06:18

Is there an easy way to build no-code AI assistants or agents with Groq? I know zero code
@madhudson1 - 21.05.2024 08:50

Good luck getting local, quantized models to reliably function call, or use any kind of 'tool'. They need so much more supervision, which is where frameworks like langgraph can help, rather than crew
@togai-dev - 26.05.2024 21:22

Hey Brandon, great video by the way. There seems to be an error as such:
It seems we encountered an unexpected error while trying to use the tool. This was the error: Invalid json output: Based on the provided text, a valid output schema for the tool is:

{
  "tool_name": str,
  "arguments": {
    "query": str
  }
}

This schema defines two keys: `tool_name`, which should be a string, and `arguments`, which should be a dictionary containing one key-value pair. The key in this case is `query`, with the value being another string.
'str' object has no attribute 'tool_name'
@darkyz543 - 09.06.2024 15:20

Your channel is THE real gold mine. Thank you so much.

@Imakemvps - 22.06.2024 02:14

I hope we can get access to your Skool soon! It's been a few days. So I can learn from your group.
@AnjuMohan-d8c - 03.07.2024 07:43

Can someone help me, I got the following when I ran llama3 in Ollama.
Created a chunk of size 1414, which is longer than the specified 1000
Created a chunk of size 1089, which is longer than the specified 1000
Created a chunk of size 1236, which is longer than the specified 1000
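Those lines are warnings rather than errors: a character-based text splitter first splits on a separator, and when a single resulting piece is already longer than the configured chunk size it keeps the piece whole and warns. A pure-Python sketch of that behavior (this is an illustration, not LangChain's actual splitter code; `split_on_separator` is an invented name):

```python
def split_on_separator(text, chunk_size=1000, separator="\n\n"):
    """Split on a separator; warn (like the log above) when a piece is
    already longer than chunk_size, since it cannot be split further
    without cutting across the separator boundary."""
    chunks = []
    for piece in text.split(separator):
        if len(piece) > chunk_size:
            print(f"Created a chunk of size {len(piece)}, "
                  f"which is longer than the specified {chunk_size}")
        chunks.append(piece)
    return chunks

# A 1414-character paragraph reproduces the first warning in the comment.
chunks = split_on_separator("a" * 1414 + "\n\n" + "b" * 200)
```

In practice the run still completes; the warnings just mean some source paragraphs exceed the 1000-character target, which can be addressed by raising the chunk size or pre-splitting long paragraphs.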

@tusharparakh6908 - 18.07.2024 23:18

Can I use this and deploy it live so that other people can use it? Does it run for free only locally, or is it free when it's deployed as well?
@aboali-pl7ib - 02.08.2024 16:37

thank you for your help 🤘🤘
@MichaelDavison-mv8dr - 27.08.2024 05:13

Can't get the tools.search_tools module to run; it says not found. I've tried the pip install command, and just the install command with the tool's name, with no luck. Any ideas, please?
@QiuyiFeng-t2j - 01.09.2024 00:41

Max RPM reached, waiting for next minute to start. How to solve it...
@geneanthony3421 - 12.09.2024 16:54

I ran the code on Ollama and on Groq and I'm getting a loop: "It seems we encountered an unexpected error while trying to use the tool. This was the error: 'organic'" [Info]: Max RPM reached, waiting for next minute to start
@dataninjuh2135 - 15.10.2024 04:33

This man knows what the people want, getting up and running with LLMs and Agents for the F R E E 😮‍💨 !
"Now this is pod racing!" 😂🙏🏻👍