After a Callable is deployed, Cortex provides a curl request that can be used to send inputs to it directly:
curl -L https://trycortex.ai/api/sdk/p/[pID]/a/[aID]/r \
  -H "Authorization: Bearer $CORTEX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "version": 1,
    "config": {
      "OUTPUT_STREAM": {
        "provider_id": "openai",
        "model_id": "gpt-3.5-turbo-16k",
        "use_cache": true,
        "use_semantic_cache": false
      },
      "RETRIEVALS": {
        "knowledge": [{
          "project_id": "$pID",
          "data_source_id": "$DATASOURCE"
        }],
        "top_k": 10,
        "filter": { "tags": null, "timestamp": null },
        "use_cache": false
      }
    },
    "blocking": true,
    "inputs": [{ "Key": "Value" }]
  }'
When the specified key-value pairs are supplied in the inputs field, the HTTP response contains a JSON object with the corresponding output from the Callable.
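The same request can also be made programmatically. Below is a minimal sketch in Python using the requests library, assuming the same endpoint, API key, and payload as the curl command above; the project ID, Callable ID, and data source values are placeholders to replace with your own.

import os
import requests

# Placeholders (not from the original snippet): substitute your own IDs.
PID = "your-project-id"
AID = "your-callable-id"
DATASOURCE = "your-data-source-id"

url = f"https://trycortex.ai/api/sdk/p/{PID}/a/{AID}/r"
headers = {
    "Authorization": f"Bearer {os.environ['CORTEX_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "version": 1,
    "config": {
        "OUTPUT_STREAM": {
            "provider_id": "openai",
            "model_id": "gpt-3.5-turbo-16k",
            "use_cache": True,
            "use_semantic_cache": False,
        },
        "RETRIEVALS": {
            "knowledge": [{"project_id": PID, "data_source_id": DATASOURCE}],
            "top_k": 10,
            "filter": {"tags": None, "timestamp": None},
            "use_cache": False,
        },
    },
    "blocking": True,
    # Keys here must match the input variables the Callable expects.
    "inputs": [{"Key": "Value"}],
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json())  # the Callable's output as JSON

Documents in a knowledge data source are managed through a similar endpoint. The request below sends new text content for the document identified by [docID]: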
curl -L https://trycortex.ai/api/sdk/p/[pID]/knowledge/[knowledgeName]/d/[docID] \
  -H "Authorization: Bearer $CORTEX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "value"
  }'
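The equivalent document request can be sketched in Python with requests under the same assumptions as before; the project ID, knowledge name, and document ID below are placeholders.

import os
import requests

# Placeholders (not from the original snippet): substitute your own values.
PID = "your-project-id"
KNOWLEDGE_NAME = "your-knowledge-name"
DOC_ID = "your-document-id"

url = f"https://trycortex.ai/api/sdk/p/{PID}/knowledge/{KNOWLEDGE_NAME}/d/{DOC_ID}"
headers = {
    "Authorization": f"Bearer {os.environ['CORTEX_API_KEY']}",
    "Content-Type": "application/json",
}

# "text" carries the document content, mirroring the curl request above.
response = requests.post(url, headers=headers, json={"text": "value"})
response.raise_for_status()
print(response.json())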