diff --git a/docs/docs/integrations/providers/sap.mdx b/docs/docs/integrations/providers/sap.mdx index 97cf2649b6e9c..3c37dc74a3ab7 100644 --- a/docs/docs/integrations/providers/sap.mdx +++ b/docs/docs/integrations/providers/sap.mdx @@ -7,10 +7,10 @@ ## Installation and Setup -We need to install the `hdbcli` python package. +We need to install the `langchain-hana` python package. ```bash -pip install hdbcli +pip install langchain-hana ``` ## Vectorstore @@ -21,5 +21,5 @@ pip install hdbcli See a [usage example](/docs/integrations/vectorstores/sap_hanavector). ```python -from langchain_community.vectorstores.hanavector import HanaDB +from langchain_hana import HanaDB ``` diff --git a/docs/docs/integrations/vectorstores/sap_hanavector.ipynb b/docs/docs/integrations/vectorstores/sap_hanavector.ipynb index ded2118134bc2..801a24a6b5f78 100644 --- a/docs/docs/integrations/vectorstores/sap_hanavector.ipynb +++ b/docs/docs/integrations/vectorstores/sap_hanavector.ipynb @@ -6,18 +6,16 @@ "source": [ "# SAP HANA Cloud Vector Engine\n", "\n", - ">[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is a vector store fully integrated into the `SAP HANA Cloud` database.\n", - "\n", - "You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration" + ">[SAP HANA Cloud Vector Engine](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/sap-hana-cloud-sap-hana-database-vector-engine-guide) is a vector store fully integrated into the `SAP HANA Cloud` database." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Setting up\n", + "## Setup\n", "\n", - "Installation of the HANA database driver." + "Install the `langchain-hana` external integration package, as well as the other packages used throughout this notebook." 
] }, { @@ -26,65 +24,134 @@ "metadata": { "tags": [] }, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Note: you may need to restart the kernel to use updated packages.\n" + ] + } + ], "source": [ - "# Pip install necessary package\n", - "%pip install --upgrade --quiet hdbcli" + "%pip install -qU langchain-hana" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "For `OpenAIEmbeddings` we use the OpenAI API key from the environment." + "### Credentials\n", + "\n", + "Ensure your SAP HANA instance is running. Load your credentials from environment variables and create a connection:" ] }, { "cell_type": "code", - "execution_count": 1, - "metadata": { - "ExecuteTime": { - "end_time": "2023-09-09T08:02:16.802456Z", - "start_time": "2023-09-09T08:02:07.065604Z" - } - }, + "execution_count": 2, + "metadata": {}, "outputs": [], "source": [ "import os\n", - "# Use OPENAI_API_KEY env variable\n", - "# os.environ[\"OPENAI_API_KEY\"] = \"Your OpenAI API key\"" + "\n", + "from dotenv import load_dotenv\n", + "from hdbcli import dbapi\n", + "\n", + "load_dotenv()\n", + "# Use connection settings from the environment\n", + "connection = dbapi.connect(\n", + " address=os.environ.get(\"HANA_DB_ADDRESS\"),\n", + " port=os.environ.get(\"HANA_DB_PORT\"),\n", + " user=os.environ.get(\"HANA_DB_USER\"),\n", + " password=os.environ.get(\"HANA_DB_PASSWORD\"),\n", + " autocommit=True,\n", + " sslValidateCertificate=False,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Learn more about SAP HANA in [What is SAP HANA?](https://www.sap.com/products/data-cloud/hana/what-is-sap-hana.html)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Initialization\n", + "To initialize a `HanaDB` vector store, you need a database connection and an embedding instance. SAP HANA Cloud Vector Engine supports both external and internal embeddings." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Create a database connection to a HANA Cloud instance." + "- #### Using External Embeddings\n", + "\n", + "import EmbeddingTabs from \"@theme/EmbeddingTabs\";\n", + "\n", + "" ] }, { "cell_type": "code", - "execution_count": 9, - "metadata": { - "ExecuteTime": { - "end_time": "2023-09-09T08:02:28.174088Z", - "start_time": "2023-09-09T08:02:28.162698Z" - } - }, + "execution_count": 3, + "metadata": {}, "outputs": [], "source": [ - "from dotenv import load_dotenv\n", - "from hdbcli import dbapi\n", + "# | output: false\n", + "# | echo: false\n", + "from langchain_openai import OpenAIEmbeddings\n", "\n", - "load_dotenv()\n", - "# Use connection settings from the environment\n", - "connection = dbapi.connect(\n", - " address=os.environ.get(\"HANA_DB_ADDRESS\"),\n", - " port=os.environ.get(\"HANA_DB_PORT\"),\n", - " user=os.environ.get(\"HANA_DB_USER\"),\n", - " password=os.environ.get(\"HANA_DB_PASSWORD\"),\n", - " autocommit=True,\n", - " sslValidateCertificate=False,\n", + "embeddings = OpenAIEmbeddings(model=\"text-embedding-3-large\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- #### Using Internal Embeddings" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Alternatively, you can compute embeddings directly in SAP HANA using its native `VECTOR_EMBEDDING()` function. To enable this, create an instance of `HanaInternalEmbeddings` with your internal model ID and pass it to `HanaDB`. Note that the `HanaInternalEmbeddings` instance is specifically designed for use with `HanaDB` and is not intended for use with other vector store implementations. 
For more information about internal embedding, see the [SAP HANA VECTOR_EMBEDDING Function](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/vector-embedding-function-vector).\n", + "\n", + "> **Caution:** Ensure NLP is enabled in your SAP HANA Cloud instance." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_hana import HanaInternalEmbeddings\n", + "\n", + "embeddings = HanaInternalEmbeddings(internal_embedding_model_id=\"SAP_NEB.20240715\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once you have your connection and embedding instance, create the vector store by passing them to `HanaDB` along with a table name for storing vectors:" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_hana import HanaDB\n", + "\n", + "db = HanaDB(\n", + " embedding=embeddings, connection=connection, table_name=\"STATE_OF_THE_UNION\"\n", ")" ] }, @@ -104,7 +171,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 6, "metadata": { "ExecuteTime": { "end_time": "2023-09-09T08:02:25.452472Z", @@ -122,40 +189,16 @@ ], "source": [ "from langchain_community.document_loaders import TextLoader\n", - "from langchain_community.vectorstores.hanavector import HanaDB\n", "from langchain_core.documents import Document\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", - "text_documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n", + "text_documents = TextLoader(\n", + " \"../../how_to/state_of_the_union.txt\", encoding=\"UTF-8\"\n", + ").load()\n", "text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n", "text_chunks = text_splitter.split_documents(text_documents)\n", - "print(f\"Number of document chunks: {len(text_chunks)}\")\n", 
- "\n", - "embeddings = OpenAIEmbeddings()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Create a LangChain VectorStore interface for the HANA database and specify the table (collection) to use for accessing the vector embeddings" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": { - "ExecuteTime": { - "end_time": "2023-09-09T08:04:16.696625Z", - "start_time": "2023-09-09T08:02:31.817790Z" - } - }, - "outputs": [], - "source": [ - "db = HanaDB(\n", - " embedding=embeddings, connection=connection, table_name=\"STATE_OF_THE_UNION\"\n", - ")" + "print(f\"Number of document chunks: {len(text_chunks)}\")" ] }, { @@ -167,7 +210,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 7, "metadata": {}, "outputs": [ { @@ -176,7 +219,7 @@ "[]" ] }, - "execution_count": 12, + "execution_count": 7, "metadata": {}, "output_type": "execute_result" } @@ -199,7 +242,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 8, "metadata": {}, "outputs": [ { @@ -235,7 +278,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 9, "metadata": {}, "outputs": [ { @@ -254,7 +297,7 @@ } ], "source": [ - "from langchain_community.vectorstores.utils import DistanceStrategy\n", + "from langchain_hana.utils import DistanceStrategy\n", "\n", "db = HanaDB(\n", " embedding=embeddings,\n", @@ -286,7 +329,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 10, "metadata": { "ExecuteTime": { "end_time": "2023-09-09T08:05:23.276819Z", @@ -336,7 +379,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 11, "metadata": {}, "outputs": [ { @@ -411,7 +454,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": 12, "metadata": {}, "outputs": [ { @@ -420,7 +463,7 @@ "True" ] }, - "execution_count": 19, + "execution_count": 12, "metadata": {}, "output_type": "execute_result" } @@ -443,7 +486,7 @@ }, { "cell_type": "code", 
- "execution_count": 20, + "execution_count": 13, "metadata": {}, "outputs": [ { @@ -452,7 +495,7 @@ "[]" ] }, - "execution_count": 20, + "execution_count": 13, "metadata": {}, "output_type": "execute_result" } @@ -471,7 +514,7 @@ }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 14, "metadata": {}, "outputs": [ { @@ -480,7 +523,7 @@ "[]" ] }, - "execution_count": 21, + "execution_count": 14, "metadata": {}, "output_type": "execute_result" } @@ -508,7 +551,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 15, "metadata": {}, "outputs": [ { @@ -539,7 +582,7 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 16, "metadata": {}, "outputs": [ { @@ -578,13 +621,14 @@ "| `$nin` | Not contained in a set of given values (not in) |\n", "| `$between` | Between the range of two boundary values |\n", "| `$like` | Text equality based on the \"LIKE\" semantics in SQL (using \"%\" as wildcard) |\n", + "| `$contains` | Filters documents containing a specific keyword |\n", "| `$and` | Logical \"and\", supporting 2 or more operands |\n", "| `$or` | Logical \"or\", supporting 2 or more operands |" ] }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 17, "metadata": {}, "outputs": [], "source": [ @@ -592,15 +636,15 @@ "docs = [\n", " Document(\n", " page_content=\"First\",\n", - " metadata={\"name\": \"adam\", \"is_active\": True, \"id\": 1, \"height\": 10.0},\n", + " metadata={\"name\": \"Adam Smith\", \"is_active\": True, \"id\": 1, \"height\": 10.0},\n", " ),\n", " Document(\n", " page_content=\"Second\",\n", - " metadata={\"name\": \"bob\", \"is_active\": False, \"id\": 2, \"height\": 5.7},\n", + " metadata={\"name\": \"Bob Johnson\", \"is_active\": False, \"id\": 2, \"height\": 5.7},\n", " ),\n", " Document(\n", " page_content=\"Third\",\n", - " metadata={\"name\": \"jane\", \"is_active\": True, \"id\": 3, \"height\": 2.4},\n", + " metadata={\"name\": \"Jane Doe\", \"is_active\": True, 
\"id\": 3, \"height\": 2.4},\n", " ),\n", "]\n", "\n", @@ -632,7 +676,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 18, "metadata": {}, "outputs": [ { @@ -640,19 +684,19 @@ "output_type": "stream", "text": [ "Filter: {'id': {'$ne': 1}}\n", - "{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n", - "{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n", + "{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n", "Filter: {'id': {'$gt': 1}}\n", - "{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n", - "{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n", + "{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n", "Filter: {'id': {'$gte': 1}}\n", - "{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n", - "{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n", - "{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n", + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n", + "{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n", "Filter: {'id': {'$lt': 1}}\n", "\n", "Filter: {'id': {'$lte': 1}}\n", - "{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n" + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n" ] } ], @@ -687,7 +731,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 19, "metadata": {}, "outputs": [ { @@ -695,13 +739,13 @@ "output_type": "stream", "text": [ "Filter: {'id': {'$between': (1, 2)}}\n", - "{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n", - "{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n", - "Filter: {'name': {'$in': ['adam', 'bob']}}\n", - "{'name': 'adam', 'is_active': True, 'id': 1, 
'height': 10.0}\n", - "{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n", - "Filter: {'name': {'$nin': ['adam', 'bob']}}\n", - "{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n" + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n", + "Filter: {'name': {'$in': ['Adam Smith', 'Bob Johnson']}}\n", + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n", + "Filter: {'name': {'$nin': ['Adam Smith', 'Bob Johnson']}}\n", + "{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n" ] } ], @@ -710,11 +754,11 @@ "print(f\"Filter: {advanced_filter}\")\n", "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n", "\n", - "advanced_filter = {\"name\": {\"$in\": [\"adam\", \"bob\"]}}\n", + "advanced_filter = {\"name\": {\"$in\": [\"Adam Smith\", \"Bob Johnson\"]}}\n", "print(f\"Filter: {advanced_filter}\")\n", "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n", "\n", - "advanced_filter = {\"name\": {\"$nin\": [\"adam\", \"bob\"]}}\n", + "advanced_filter = {\"name\": {\"$nin\": [\"Adam Smith\", \"Bob Johnson\"]}}\n", "print(f\"Filter: {advanced_filter}\")\n", "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))" ] @@ -728,7 +772,7 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 20, "metadata": {}, "outputs": [ { @@ -736,10 +780,10 @@ "output_type": "stream", "text": [ "Filter: {'name': {'$like': 'a%'}}\n", - "{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n", + "\n", "Filter: {'name': {'$like': '%a%'}}\n", - "{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n", - "{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n" + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 
10.0}\n", + "{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n" ] } ], @@ -753,6 +797,51 @@ "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Text filtering with `$contains`" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Filter: {'name': {'$contains': 'bob'}}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n", + "Filter: {'name': {'$contains': 'bo'}}\n", + "\n", + "Filter: {'name': {'$contains': 'Adam Johnson'}}\n", + "\n", + "Filter: {'name': {'$contains': 'Adam Smith'}}\n", + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n" + ] + } + ], + "source": [ + "advanced_filter = {\"name\": {\"$contains\": \"bob\"}}\n", + "print(f\"Filter: {advanced_filter}\")\n", + "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n", + "\n", + "advanced_filter = {\"name\": {\"$contains\": \"bo\"}}\n", + "print(f\"Filter: {advanced_filter}\")\n", + "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n", + "\n", + "advanced_filter = {\"name\": {\"$contains\": \"Adam Johnson\"}}\n", + "print(f\"Filter: {advanced_filter}\")\n", + "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n", + "\n", + "advanced_filter = {\"name\": {\"$contains\": \"Adam Smith\"}}\n", + "print(f\"Filter: {advanced_filter}\")\n", + "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -762,7 +851,7 @@ }, { "cell_type": "code", - "execution_count": 28, + "execution_count": 22, "metadata": {}, "outputs": [ { @@ -770,14 +859,15 @@ "output_type": "stream", "text": [ "Filter: {'$or': [{'id': 1}, {'name': 'bob'}]}\n", - "{'name': 
'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n", - "{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n", + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n", "Filter: {'$and': [{'id': 1}, {'id': 2}]}\n", "\n", "Filter: {'$or': [{'id': 1}, {'id': 2}, {'id': 3}]}\n", - "{'name': 'adam', 'is_active': True, 'id': 1, 'height': 10.0}\n", - "{'name': 'bob', 'is_active': False, 'id': 2, 'height': 5.7}\n", - "{'name': 'jane', 'is_active': True, 'id': 3, 'height': 2.4}\n" + "{'name': 'Adam Smith', 'is_active': True, 'id': 1, 'height': 10.0}\n", + "{'name': 'Jane Doe', 'is_active': True, 'id': 3, 'height': 2.4}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n", + "Filter: {'$and': [{'name': {'$contains': 'bob'}}, {'name': {'$contains': 'johnson'}}]}\n", + "{'name': 'Bob Johnson', 'is_active': False, 'id': 2, 'height': 5.7}\n" ] } ], @@ -792,6 +882,12 @@ "\n", "advanced_filter = {\"$or\": [{\"id\": 1}, {\"id\": 2}, {\"id\": 3}]}\n", "print(f\"Filter: {advanced_filter}\")\n", + "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))\n", + "\n", + "advanced_filter = {\n", + " \"$and\": [{\"name\": {\"$contains\": \"bob\"}}, {\"name\": {\"$contains\": \"johnson\"}}]\n", + "}\n", + "print(f\"Filter: {advanced_filter}\")\n", "print_filter_result(db.similarity_search(\"just testing\", k=5, filter=advanced_filter))" ] }, @@ -804,13 +900,10 @@ }, { "cell_type": "code", - "execution_count": 29, + "execution_count": 23, "metadata": {}, "outputs": [], "source": [ - "from langchain.memory import ConversationBufferMemory\n", - "from langchain_openai import ChatOpenAI\n", - "\n", "# Access the vector DB with a new table\n", "db = HanaDB(\n", " connection=connection,\n", @@ -837,7 +930,7 @@ }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 24, "metadata": {}, "outputs": [], "source": [ @@ -874,6 +967,8 @@ "outputs": [], "source": [ "from langchain.chains import 
ConversationalRetrievalChain\n", + "from langchain.memory import ConversationBufferMemory\n", + "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo\")\n", "memory = ConversationBufferMemory(\n", @@ -898,7 +993,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 26, "metadata": {}, "outputs": [ { @@ -907,7 +1002,7 @@ "text": [ "Answer from LLM:\n", "================\n", - "The United States has set up joint patrols with Mexico and Guatemala to catch more human traffickers. This collaboration is part of the efforts to address immigration issues and secure the borders in the region.\n", + "The United States has set up joint patrols with Mexico and Guatemala to catch more human traffickers at the border. This collaborative effort aims to improve border security and combat illegal activities such as human trafficking.\n", "================\n", "Number of used source document chunks: 5\n" ] @@ -954,7 +1049,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 28, "metadata": {}, "outputs": [ { @@ -963,12 +1058,12 @@ "text": [ "Answer from LLM:\n", "================\n", - "Mexico and Guatemala are involved in joint patrols to catch human traffickers.\n" + "Countries like Mexico and Guatemala are participating in joint patrols to catch human traffickers. The United States is also working with partners in South and Central America to host more refugees and secure their borders. Additionally, the U.S. 
is working with twenty-seven members of the European Union, as well as countries like France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and Switzerland.\n" ] } ], "source": [ - "question = \"What about other countries?\"\n", + "question = \"How many casualties were reported after that?\"\n", "\n", "result = qa_chain.invoke({\"question\": question})\n", "print(\"Answer from LLM:\")\n", @@ -996,7 +1091,7 @@ }, { "cell_type": "code", - "execution_count": 35, + "execution_count": 29, "metadata": {}, "outputs": [ { @@ -1005,7 +1100,7 @@ "[]" ] }, - "execution_count": 35, + "execution_count": 29, "metadata": {}, "output_type": "execute_result" } @@ -1038,7 +1133,7 @@ }, { "cell_type": "code", - "execution_count": 36, + "execution_count": 30, "metadata": {}, "outputs": [ { @@ -1101,7 +1196,7 @@ }, { "cell_type": "code", - "execution_count": 39, + "execution_count": 32, "metadata": {}, "outputs": [ { @@ -1111,7 +1206,7 @@ "None\n", "Some other text\n", "{\"start\": 400, \"end\": 450, \"doc_name\": \"other.txt\"}\n", - "\n" + "\n" ] } ], @@ -1168,7 +1263,7 @@ }, { "cell_type": "code", - "execution_count": 40, + "execution_count": 33, "metadata": {}, "outputs": [ { @@ -1176,9 +1271,9 @@ "output_type": "stream", "text": [ "--------------------------------------------------------------------------------\n", - "Some other text\n", + "Some more text\n", "--------------------------------------------------------------------------------\n", - "Some more text\n" + "Some other text\n" ] } ], @@ -1214,7 +1309,7 @@ }, { "cell_type": "code", - "execution_count": 41, + "execution_count": 34, "metadata": {}, "outputs": [ { @@ -1224,7 +1319,7 @@ "Filters on this value are very performant\n", "Some other text\n", "{\"start\": 400, \"end\": 450, \"doc_name\": \"other.txt\", \"CUSTOMTEXT\": \"Filters on this value are very performant\"}\n", - "\n" + "\n" ] } ], @@ -1291,7 +1386,7 @@ }, { "cell_type": "code", - "execution_count": 42, + "execution_count": 
35, "metadata": {}, "outputs": [ { @@ -1299,9 +1394,9 @@ "output_type": "stream", "text": [ "--------------------------------------------------------------------------------\n", - "Some other text\n", + "Some more text\n", "--------------------------------------------------------------------------------\n", - "Some more text\n" + "Some other text\n" ] } ], @@ -1330,9 +1425,9 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "lc3", "language": "python", - "name": "python3" + "name": "your_env_name" }, "language_info": { "codemirror_mode": { @@ -1344,7 +1439,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.14" + "version": "3.10.16" } }, "nbformat": 4, diff --git a/libs/community/langchain_community/query_constructors/hanavector.py b/libs/community/langchain_community/query_constructors/hanavector.py index f19ef8c278743..79937820736d8 100644 --- a/libs/community/langchain_community/query_constructors/hanavector.py +++ b/libs/community/langchain_community/query_constructors/hanavector.py @@ -1,6 +1,7 @@ # HANA Translator/query constructor from typing import Dict, Tuple, Union +from langchain_core._api import deprecated from langchain_core.structured_query import ( Comparator, Comparison, @@ -11,8 +12,25 @@ ) +@deprecated( + since="0.3.23", + removal="1.0", + message=( + "This class is deprecated and will be removed in a future version. " + "Please use query_constructors.HanaTranslator from the " + "langchain_hana package instead. " + "See https://github.com/SAP/langchain-integration-for-sap-hana-cloud " + "for details." + ), + alternative="from langchain_hana.query_constructors import HanaTranslator;", + pending=False, +) class HanaTranslator(Visitor): """ + **DEPRECATED**: This class is deprecated and will no longer be maintained. + Please use query_constructors.HanaTranslator from the langchain_hana + package instead. 
It offers an improved implementation and full support. + Translate internal query language elements to valid filters params for HANA vectorstore. """ diff --git a/libs/community/langchain_community/vectorstores/hanavector.py b/libs/community/langchain_community/vectorstores/hanavector.py index 9212c53f5a4ca..9c20656ac491c 100644 --- a/libs/community/langchain_community/vectorstores/hanavector.py +++ b/libs/community/langchain_community/vectorstores/hanavector.py @@ -19,6 +19,7 @@ ) import numpy as np +from langchain_core._api import deprecated from langchain_core.documents import Document from langchain_core.embeddings import Embeddings from langchain_core.runnables.config import run_in_executor @@ -66,9 +67,25 @@ default_vector_column_length: int = -1 # -1 means dynamic length +@deprecated( + since="0.3.23", + removal="1.0", + message=( + "This class is deprecated and will be removed in a future version. " + "Please use HanaDB from the langchain_hana package instead. " + "See https://github.com/SAP/langchain-integration-for-sap-hana-cloud " + "for details." + ), + alternative="from langchain_hana import HanaDB;", + pending=False, +) class HanaDB(VectorStore): """SAP HANA Cloud Vector Engine + **DEPRECATED**: This class is deprecated and will no longer be maintained. + Please use HanaDB from the langchain_hana package instead. It offers an + improved implementation and full support. + The prerequisite for using this class is the installation of the ``hdbcli`` Python package. diff --git a/libs/packages.yml b/libs/packages.yml index 50418c59ef332..3e6a88b0841fb 100644 --- a/libs/packages.yml +++ b/libs/packages.yml @@ -643,3 +643,10 @@ packages: repo: valyu-network/langchain-valyu downloads: 120 downloads_updated_at: '2025-04-22T15:25:24.644345+00:00' +- name: langchain-hana + path: . + repo: SAP/langchain-integration-for-sap-hana-cloud + name_title: SAP HANA + provider_page: sap + downloads: 315 + downloads_updated_at: '2025-04-27T19:45:43.938924+00:00'
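Reviewer note on the new `$contains` operator this patch adds: the notebook outputs above show whole-keyword matching (`"bob"` matches `"Bob Johnson"`, but the partial keyword `"bo"` matches nothing, and `"Adam Johnson"` matches neither document because both keywords must be present in the same value). A minimal pure-Python sketch of that matching rule, reproducing the notebook's results — an illustrative approximation only, not the actual SQL predicate that `HanaDB` generates:

```python
def contains_match(value: str, query: str) -> bool:
    # Approximation of the keyword semantics behind `$contains`:
    # every query keyword must equal (case-insensitively) some whole
    # whitespace-delimited token of the metadata value.
    tokens = {t.lower() for t in value.split()}
    return all(kw.lower() in tokens for kw in query.split())


# The three documents used in the notebook's filter examples
docs = [{"name": "Adam Smith"}, {"name": "Bob Johnson"}, {"name": "Jane Doe"}]

for query in ["bob", "bo", "Adam Johnson", "Adam Smith"]:
    hits = [d["name"] for d in docs if contains_match(d["name"], query)]
    print(f"$contains {query!r} -> {hits}")
# $contains 'bob' -> ['Bob Johnson']
# $contains 'bo' -> []
# $contains 'Adam Johnson' -> []
# $contains 'Adam Smith' -> ['Adam Smith']
```

The same rule explains the `$and` example in the notebook: `{"$and": [{"name": {"$contains": "bob"}}, {"name": {"$contains": "johnson"}}]}` matches only `"Bob Johnson"`, since both keywords occur as tokens of that one value.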