diff --git a/README.md b/README.md index da36470..3245711 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ [![GitHub release (latest by date)](https://img.shields.io/github/v/release/ITSpecialist111/AI_AUTOMATION_SUGGESTER)](https://github.com/ITSpecialist111/AI_AUTOMATION_SUGGESTER/releases) [![hacs_badge](https://img.shields.io/badge/HACS-Default-41BDF5.svg)](https://github.com/hacs/integration) -An integration for Home Assistant that uses OpenAI's GPT models to analyze your newly added entities and suggest potential automations. +An integration for Home Assistant that uses AI models to analyze your entities and suggest potential automations. Now supporting multiple providers including OpenAI, Google, and local models for enhanced privacy. --- @@ -30,47 +30,43 @@ Your support is greatly appreciated and helps maintain and improve this project! - [Installing via HACS (Recommended)](#installing-via-hacs-recommended) - [Manual Installation](#manual-installation) - [Configuration](#configuration) -- [Known Issues](#known-Issues) - [Usage](#usage) - [Important Notes](#important-notes) - [Troubleshooting](#troubleshooting) - [Roadmap](#roadmap) - - [Phase 1: Enhanced Entity Analysis](#phase-1-enhanced-entity-analysis) - - [Phase 2: Interactive Suggestion Management](#phase-2-interactive-suggestion-management) - - [Phase 3: Automated Automation Creation](#phase-3-automated-automation-creation) - [Future Enhancements](#future-enhancements) - - [Contributing to the Roadmap](#contributing-to-the-roadmap) - - [Timeline and Updates](#timeline-and-updates) - [License](#license) - [Acknowledgments](#acknowledgments) - [Contributions](#contributions) - [Disclaimer](#disclaimer) - [Support the Project](#support-the-project) +- [Additional Information](#additional-information) +- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq) --- ## Background and Purpose -Managing and automating devices in a smart home can be complex, especially as the number of devices 
grows. The **AI Automation Suggester** integration aims to simplify this process by leveraging OpenAI's GPT models to analyze newly added entities in your Home Assistant setup and provide intelligent automation suggestions. +Managing and automating devices in a smart home can be complex, especially as the number of devices grows. The **AI Automation Suggester** integration aims to simplify this process by leveraging AI models to analyze your entities in Home Assistant and provide intelligent automation suggestions. --- ## Features -- **Automatic Analysis**: Periodically scans for new entities and analyzes them using AI. -- **Automation Suggestions**: Provides clear and concise suggestions for potential automations. -- **Manual Trigger**: Allows you to manually trigger the AI analysis at any time. +- **Manual Analysis Trigger**: Allows you to manually trigger the AI analysis at any time, providing flexibility to implement an automation trigger that suits your needs. +- **Supports Multiple AI Providers**: Choose from a variety of AI models including OpenAI, Google, Groq, and local models like LocalAI and Ollama for privacy-focused users. +- **Custom LLM Variants**: Users can select their preferred AI model variant, such as OpenAI's `o1-preview`. - **Persistent Notifications**: Suggestions are delivered via Home Assistant's persistent notifications. - **Sensor Entity**: Creates a sensor entity to display the status and suggestions. -- **Configurable Scan Frequency**: Set how often the integration scans for new entities. -- **Supports OpenAI GPT Models**: Uses OpenAI's GPT models for analysis. +- **German Translation**: Added support for German language to reach a global audience. --- ## Prerequisites - **Home Assistant**: Version 2023.5 or later. -- **OpenAI API Key**: You need an OpenAI API key to use the AI processing. +- **API Keys**: Depending on the provider you choose, you may need API keys for OpenAI, Anthropic, Google, or Groq. 
+- **Local AI Setup**: For LocalAI and Ollama, you need to have the respective servers running on your local network. --- @@ -143,7 +139,8 @@ If you prefer to install the integration manually, follow these steps: ├── services.yaml ├── strings.json └── translations/ - └── en.json + ├── en.json + └── de.json ``` - If the `custom_components` directory doesn't exist, create it. @@ -164,55 +161,43 @@ If you prefer to install the integration manually, follow these steps: ### 2. **Configure the Integration** -- **Scan Frequency (hours)**: Set how often (in hours) the integration scans for new entities. Default is `24` hours. -- **OpenAI API Key**: Enter your OpenAI API key. -- **Scan Frequency (hours)**: Scan Frequency (hours): Set how often (in hours) the integration scans for new entities. Default is 24 hours. Set to 0 to disable automatic scanning. -- **Initial Lag Time**: Initial Lag Time (minutes): Set a delay before initial suggestions are generated after setup. Default is 10 minutes. -- **OpenAI API Key**: Enter your OpenAI API key. +- **Provider**: Choose your preferred AI provider from the list (OpenAI, Anthropic, Google, Groq, LocalAI, Ollama, or Custom OpenAI). +- **API Key or Server Details**: Depending on the provider, you may need to enter an API key or provide server details for local models. +- **Model Selection**: Choose the AI model variant you wish to use (e.g., OpenAI's `o1-preview`). +- **Custom System Prompt**: (Optional) Override the built-in system prompt with your own for more granular control. -### 3. **Obtain an OpenAI API Key** +### 3. **Obtain API Keys or Set Up Local AI Servers** -- Log in or sign up at the [OpenAI Platform](https://platform.openai.com/). -- Navigate to the [API Keys page](https://platform.openai.com/account/api-keys). -- Click on **Create new secret key** and copy the key. -- **Important**: Keep your API key secure and do not share it publicly. - ---- -## Known Issues - -### 1. 
**Generic Suggestions on Initial Setup** - -After the initial setup of the integration, the suggester will create a new persistent notification in Home Assistant. The initial suggestions are a generic list; however, you can manually trigger new suggestions that will look at your specific entities (or wait for the schedule to run). - -You can manually trigger this by going to Developer Tools -> Actions -> AI Automation Suggester: Generate Suggestions -> Perform Action. +- **OpenAI**: Obtain an API key from the [OpenAI Dashboard](https://platform.openai.com/account/api-keys). +- **Anthropic**: Sign up for an API key at [Anthropic](https://www.anthropic.com/). +- **Google**: Get an API key from the [Google Cloud Console](https://console.cloud.google.com/). +- **Groq**: Register and obtain an API key from [Groq](https://groq.com/). +- **LocalAI/Ollama**: Set up the respective servers on your local network. --- ## Usage -### Video Tutorial via @BeardedTinker YouTube Channel **With Thanks!** - -[![Watch the video](https://img.youtube.com/vi/-jnM33xQ3OQ/0.jpg)](https://www.youtube.com/watch?v=-jnM33xQ3OQ) +### 1. **Manual Trigger** +Since automatic scheduling has been removed for better flexibility and stability, you can now manually trigger the AI analysis: -### 1. **Automatic Suggestions** +- Go to **Developer Tools** > **Services**. +- Select `ai_suggester.generate_suggestions` from the list. +- In the service data, you can specify: -- The integration will automatically scan for new entities based on the configured scan frequency. -- When new entities are detected, it will analyze them and create automation suggestions. + - **Provider**: Override the default provider if desired. + - **System Prompt**: Provide a custom prompt to tailor the suggestions. + - **Entities**: Specify a list of entities to analyze. -### 2. **Manual Trigger** +- Click **Call Service**. -- You can manually trigger the AI analysis at any time: - - Go to **Developer Tools** > **Services**. 
- - Select `ai_suggester.generate_suggestions` from the list. - - Click **Call Service**. +### 2. **Implementing Automations** -### 3. **Viewing Suggestions** +- The integration will generate suggestions and deliver them via persistent notifications. +- Review the suggestions and implement the automations that suit your needs. -- Suggestions are delivered via Home Assistant's persistent notifications. -- You can also view suggestions in the `sensor.ai_automation_suggestions` entity's attributes. - -### 4. **Adding to Lovelace Dashboard** +### 3. **Adding to Lovelace Dashboard** - You can display the suggestions on your dashboard using an **Entities** card: @@ -226,25 +211,25 @@ You can manually trigger this by going to Developer Tools -> Actions -> AI Autom ## Important Notes -### **OpenAI API Key Security** +### **AI Provider API Key Security** -- **Do Not Share Your API Key**: Keep your OpenAI API key confidential. +- **Do Not Share Your API Keys**: Keep your API keys confidential. - **Revoking Compromised Keys**: If you suspect your API key has been compromised, revoke it immediately and generate a new one. -### **OpenAI API Usage** +### **API Usage** -- **Costs**: Using the OpenAI API may incur costs. Monitor your usage in the [OpenAI Dashboard](https://platform.openai.com/account/usage). -- **Usage Limits**: Set usage limits in your OpenAI account to avoid unexpected charges. +- **Costs**: Using AI provider APIs may incur costs. Monitor your usage in your provider's dashboard. +- **Usage Limits**: Set usage limits in your account to avoid unexpected charges. ### **Compatibility** -- **OpenAI Python Library**: The integration requires `openai>=1.0.0`. This is specified in the `manifest.json`. - **Home Assistant Version**: Ensure you are running Home Assistant version 2023.5 or later. +- **Local AI Models**: If using local models, ensure your local servers are correctly set up and accessible. 
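When a local model fails validation, it helps to know exactly which endpoint the integration probes. Based on the `validate_localai` and `validate_ollama` methods in the `config_flow.py` changes below, a minimal sketch (this helper is illustrative, not part of the integration; the IP addresses and ports shown are examples):

```python
# Illustrative sketch only -- mirrors the URL construction in the config flow's
# validate_localai / validate_ollama methods from this patch.

def validation_url(provider: str, ip: str, port: int, https: bool = False) -> str:
    """Build the health-check URL the config flow requests for a local provider."""
    protocol = "https" if https else "http"
    paths = {
        "LocalAI": "/v1/models",  # endpoint probed by validate_localai
        "Ollama": "/api/tags",    # endpoint probed by validate_ollama
    }
    return f"{protocol}://{ip}:{port}{paths[provider]}"

# Example addresses/ports -- substitute your own server details.
print(validation_url("LocalAI", "192.168.1.10", 8080))
# http://192.168.1.10:8080/v1/models
print(validation_url("Ollama", "192.168.1.10", 11434))
```

If a plain GET to that URL does not return HTTP 200 from the Home Assistant host, the config flow's validation will fail the same way.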
### **Data Privacy** -- **Data Sent to OpenAI**: The integration sends entity information to OpenAI's API for analysis. -- **User Consent**: By using this integration, you consent to this data being sent to OpenAI. +- **Data Sent to AI Providers**: The integration sends entity information to the selected AI provider's API for analysis. +- **User Consent**: By using this integration, you consent to this data being sent to the chosen AI provider. --- @@ -252,13 +237,14 @@ You can manually trigger this by going to Developer Tools -> Actions -> AI Autom ### **Common Issues** -1. **OpenAI API Errors** +1. **API Errors** - - **Symptom**: Error messages related to OpenAI API in notifications or logs. + - **Symptom**: Error messages related to AI provider APIs in notifications or logs. - **Solution**: - - Verify your OpenAI API key is correct. + - Verify your API key or server details are correct. - Ensure your API key has not expired or been revoked. - - Check your OpenAI account for any usage limits or account issues. + - Check your account for any usage limits or account issues. + - If using local models, ensure the server is running and accessible. 2. **Integration Not Showing Up** @@ -273,14 +259,14 @@ You can manually trigger this by going to Developer Tools -> Actions -> AI Autom - **Symptom**: The integration doesn't generate any suggestions. - **Solution**: - Manually trigger the service `ai_suggester.generate_suggestions`. - - Check if there are any new entities to analyze. + - Check if you have provided the necessary service data. - Review logs for any errors during the analysis. 4. **Dependency Issues** - - **Symptom**: Errors related to the OpenAI Python library version. + - **Symptom**: Errors related to missing dependencies or incorrect versions. - **Solution**: - - Ensure that the OpenAI library version is `>=1.0.0`. + - Ensure all required libraries are installed. - Clear Home Assistant's cache by deleting the `deps` directory and restart. 
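Since issue 3 above points to missing service data, here is a sketch of a service call in YAML mode (Developer Tools > Services). The field names are assumptions inferred from the Usage section of this README and may differ in your installed version; all fields are optional:

```yaml
service: ai_suggester.generate_suggestions
data:
  # Field names below are illustrative -- check services.yaml in your install.
  provider: "OpenAI"
  custom_prompt: "Focus on energy-saving automations."
  entities:
    - light.living_room
    - binary_sensor.front_door
```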
### **Logging and Debugging** @@ -292,7 +278,6 @@ You can manually trigger this by going to Developer Tools -> Actions -> AI Autom default: warning logs: custom_components.ai_suggester: debug - openai: debug ``` - View logs under **Settings** > **System** > **Logs**. @@ -301,111 +286,27 @@ You can manually trigger this by going to Developer Tools -> Actions -> AI Autom ## Roadmap -We have an ambitious roadmap for the **AI Automation Suggester** integration to enhance its capabilities and provide even more value to Home Assistant users. Below is a list of planned features and improvements: - ---- - -### **Phase 1: Enhanced Entity Analysis** - -#### **1. Comprehensive Integration and Sensor Discovery** - -- **Objective**: Extend the integration to analyze all available integrations, sensors, and automations in the user's Home Assistant setup. -- **Details**: - - Collect detailed information about existing entities and their states. - - Understand the relationships and dependencies between different entities. - - Identify potential areas where automations could enhance the smart home experience. - -#### **2. Advanced Automation Suggestions** - -- **Objective**: Provide more powerful and personalized automation suggestions based on the comprehensive analysis. -- **Details**: - - Use AI to detect patterns and usage habits. - - Suggest automations that can improve efficiency, security, and convenience. - - Include suggestions for energy savings, routine automation, and proactive alerts. - ---- - -### **Phase 2: Interactive Suggestion Management** - -#### **1. User Feedback Mechanism** - -- **Objective**: Allow users to like or dislike the suggested automations to refine future suggestions. -- **Details**: - - Implement a user interface where suggestions are listed with options to like or dislike. - - Use feedback to improve the AI model's understanding of user preferences. - - Store feedback securely and respect user privacy. - -#### **2. 
Detailed Implementation Guides** - -- **Objective**: For liked suggestions, provide concise and clear instructions on how to implement the automation. -- **Details**: - - Break down the steps required to create the automation within Home Assistant. - - Include code snippets, configuration examples, and screenshots where applicable. - - Explain the desired outcome and how the automation enhances the user's smart home. - ---- - -### **Phase 3: Automated Automation Creation** - -#### **1. One-Click Automation Deployment** - -- **Objective**: Enable users to automatically implement the suggested automations directly from the integration. -- **Details**: - - Integrate with Home Assistant's automation editor to create automations programmatically. - - Ensure automations are created following best practices and are easily editable by the user. - - Provide options for users to review and confirm automations before deployment. - -#### **2. Safety and Privacy Measures** - -- **Objective**: Implement safeguards to ensure that automations are created securely and do not compromise the user's system. -- **Details**: - - Include confirmation dialogs and summaries before making changes. - - Ensure the integration adheres to Home Assistant's security guidelines. - - Provide options to rollback changes if needed. - ---- - ### **Future Enhancements** -#### **1. Local AI Processing** +1. **Interactive Suggestion Management** -- **Objective**: Develop local AI processing capabilities to reduce reliance on cloud services. -- **Details**: - - Explore the use of local machine learning models compatible with Home Assistant's architecture. - - Improve response times and reduce costs associated with cloud AI usage. - - Enhance user privacy by keeping data processing local. + - **User Feedback Mechanism**: Allow users to provide feedback on suggestions to improve future results. + - **Detailed Implementation Guides**: Provide step-by-step instructions for implementing suggested automations. 
-#### **2. Multi-Language Support** +2. **Automated Automation Creation** -- **Objective**: Support multiple languages to cater to a global user base. -- **Details**: - - Translate the integration's interface and messages into other languages. - - Ensure AI-generated suggestions are provided in the user's preferred language. - - Collaborate with the community for translations and localization efforts. + - **One-Click Deployment**: Enable users to automatically implement suggested automations. + - **Safety Measures**: Implement safeguards to ensure automations are created securely. -#### **3. Community Integration Sharing** +3. **Enhanced Localization** -- **Objective**: Allow users to share their automations and suggestions with the community. -- **Details**: - - Create a platform or integrate with existing platforms to share and discover automations. - - Enable users to benefit from community-driven ideas and solutions. - - Implement moderation and quality control mechanisms. + - **Additional Language Support**: Expand language support beyond English and German. + - **Community Translations**: Collaborate with the community for translations and localization efforts. ---- +4. **Community Integration Sharing** -## Contributing to the Roadmap - -We welcome contributions and feedback from the community to help shape the future of the **AI Automation Suggester** integration. If you have ideas, feature requests, or would like to contribute to the development, please open an issue or submit a pull request on our [GitHub repository](https://github.com/ITSpecialist111/ai_automation_suggester). - ---- - -## Timeline and Updates - -We aim to implement these features progressively, with regular updates provided through the repository. Please check back frequently for the latest news and release notes. - ---- - -**Note:** The features listed in this roadmap are subject to change based on feasibility, user feedback, and ongoing development efforts. 
Our goal is to provide the most valuable and user-friendly experience possible. + - **Platform for Sharing**: Allow users to share their automations and suggestions with the community. + - **Moderation and Quality Control**: Implement mechanisms to ensure shared content is valuable and safe. --- @@ -418,7 +319,7 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file ## Acknowledgments - **Home Assistant Community**: For providing an amazing platform and community support. -- **OpenAI**: For their powerful AI models and APIs. +- **AI Providers**: OpenAI, Anthropic, Google, Groq, LocalAI, and Ollama for their AI models and APIs. --- @@ -430,7 +331,7 @@ Contributions are welcome! Please open an issue or submit a pull request on [Git ## Disclaimer -This integration is a third-party custom component and is not affiliated with or endorsed by Home Assistant or OpenAI. +This integration is a third-party custom component and is not affiliated with or endorsed by Home Assistant or any of the AI providers. --- @@ -446,13 +347,13 @@ Your support is greatly appreciated and helps maintain and improve this project! --- -# Additional Information +## Additional Information For any questions or support, please open an issue on [GitHub](https://github.com/ITSpecialist111/ai_automation_suggester/issues). --- -# Frequently Asked Questions (FAQ) +## Frequently Asked Questions (FAQ) ### **1. How do I update the integration when a new version is released?** @@ -461,21 +362,25 @@ For any questions or support, please open an issue on [GitHub](https://github.co - Find **AI Automation Suggester** in the list. - If an update is available, click **Update**. -### **2. Can I use this integration without an OpenAI API key?** +### **2. Can I use this integration without an API key?** -- No, an OpenAI API key is required for the integration to function, as it uses OpenAI's GPT models to generate suggestions. 
+- Yes, if you choose to use local AI models like LocalAI or Ollama, you do not need an external API key. However, you need to have the local servers set up and running. ### **3. Is my data safe when using this integration?** -- The integration sends entity information to OpenAI's API for analysis. While OpenAI has robust privacy and security measures, you should review their [privacy policy](https://openai.com/policies/privacy-policy) to understand how your data is handled. +- The integration sends entity information to the selected AI provider's API for analysis. If you use local models, your data remains within your local network. For cloud providers, you should review their privacy policies to understand how your data is handled. ### **4. I found a bug or have a feature request. How can I contribute?** - Please open an issue on the [GitHub repository](https://github.com/ITSpecialist111/ai_automation_suggester/issues) with details about the bug or your feature request. -### **5. Does the integration support local AI processing?** +### **5. How can I add support for another language?** + +- We welcome community contributions for translations. Please submit a pull request with the new language files in the `translations` directory. + +### **6. Why was automatic scheduling removed?** -- Currently, the integration only supports cloud-based AI processing using OpenAI's API. Local AI processing is planned for future updates. +- Automatic scheduling was removed to provide more stability and flexibility. Users can now implement their own triggers for the AI analysis, allowing for a more customized experience. 
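Because scheduling is now left to the user, a minimal time-based trigger can restore periodic suggestions. A sketch using standard Home Assistant automation syntax (the schedule and alias are examples; adjust to taste):

```yaml
automation:
  - alias: "Weekly AI automation suggestions"
    trigger:
      - platform: time
        at: "09:00:00"
    condition:
      - condition: time
        weekday:
          - sun
    action:
      - service: ai_suggester.generate_suggestions
```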
--- diff --git a/custom_components/ai_automation_suggester/__init__.py b/custom_components/ai_automation_suggester/__init__.py index ddf205c..7df7d44 100644 --- a/custom_components/ai_automation_suggester/__init__.py +++ b/custom_components/ai_automation_suggester/__init__.py @@ -1,60 +1,121 @@ -"""The AI Automation Suggester integration.""" +# custom_components/ai_automation_suggester/__init__.py +"""The AI Automation Suggester integration.""" import logging from homeassistant.config_entries import ConfigEntry -from homeassistant.core import HomeAssistant -from .const import DOMAIN, PLATFORMS +from homeassistant.core import HomeAssistant, ServiceCall +from homeassistant.exceptions import ConfigEntryNotReady, ServiceValidationError +from homeassistant.helpers.typing import ConfigType + +from .const import ( + DOMAIN, + PLATFORMS, + CONF_PROVIDER, + SERVICE_GENERATE_SUGGESTIONS, + ATTR_PROVIDER_CONFIG, + ATTR_CUSTOM_PROMPT, +) from .coordinator import AIAutomationCoordinator _LOGGER = logging.getLogger(__name__) -async def async_setup(hass: HomeAssistant, config: dict): +async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool: """Set up the AI Automation Suggester component.""" hass.data.setdefault(DOMAIN, {}) - return True - -async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry): - """Set up AI Automation Suggester from a config entry.""" - coordinator = AIAutomationCoordinator(hass, entry) - hass.data[DOMAIN][entry.entry_id] = coordinator - - await coordinator.async_config_entry_first_refresh() - - await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS) - - async def handle_generate_suggestions(call): - """Handle the service call to generate suggestions.""" - await coordinator.async_request_refresh() - hass.services.async_register(DOMAIN, "generate_suggestions", handle_generate_suggestions) + async def handle_generate_suggestions(call: ServiceCall) -> None: + """Handle the generate_suggestions service call.""" + 
provider_config = call.data.get(ATTR_PROVIDER_CONFIG) + custom_prompt = call.data.get(ATTR_CUSTOM_PROMPT) + + try: + coordinator = None + if provider_config: + coordinator = hass.data[DOMAIN][provider_config] + else: + for entry_id, coord in hass.data[DOMAIN].items(): + if isinstance(coord, AIAutomationCoordinator): + coordinator = coord + break + + if coordinator is None: + raise ServiceValidationError("No AI Automation Suggester provider configured") + + if custom_prompt: + original_prompt = coordinator.SYSTEM_PROMPT + try: + coordinator.SYSTEM_PROMPT = custom_prompt + await coordinator.async_request_refresh() + finally: + coordinator.SYSTEM_PROMPT = original_prompt + else: + await coordinator.async_request_refresh() + + except KeyError: + raise ServiceValidationError(f"Provider configuration not found") + except Exception as err: + raise ServiceValidationError(f"Failed to generate suggestions: {err}") + + # Register the service + hass.services.async_register( + DOMAIN, + SERVICE_GENERATE_SUGGESTIONS, + handle_generate_suggestions + ) return True -async def async_migrate_entry(hass: HomeAssistant, entry: ConfigEntry): - """Migrate old entry.""" - _LOGGER.debug(f"Starting migration for entry version {entry.version}") - - if entry.version == 1: - # Example: If moving from version 1 to 2, make changes to the data - new_data = {**entry.data} +async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool: + """Set up AI Automation Suggester from a config entry.""" + try: + # Ensure required config values are present + if CONF_PROVIDER not in entry.data: + raise ConfigEntryNotReady("Provider not specified in config") + + # Create and store coordinator + coordinator = AIAutomationCoordinator(hass, entry) + hass.data[DOMAIN][entry.entry_id] = coordinator + + # Set up platforms + for platform in PLATFORMS: + try: + await hass.config_entries.async_forward_entry_setup(entry, platform) + except Exception as err: + _LOGGER.error("Failed to setup platform %s: 
%s", platform, err) + raise ConfigEntryNotReady from err + + _LOGGER.debug( + "Setup complete for %s with provider %s", + entry.title, + entry.data.get(CONF_PROVIDER) + ) + + entry.async_on_unload(entry.add_update_listener(async_reload_entry)) - # Handle any changes in your schema or structure - if 'scan_frequency' not in new_data: - new_data['scan_frequency'] = 24 # Set a default scan frequency if it doesn't exist + return True - if 'initial_lag_time' not in new_data: - new_data['initial_lag_time'] = 10 # Add default lag time if missing + except Exception as err: + _LOGGER.error("Failed to setup integration: %s", err) + raise ConfigEntryNotReady from err - # Update the entry data - entry.version = 2 - hass.config_entries.async_update_entry(entry, data=new_data) +async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool: + """Unload a config entry.""" + try: + # Unload platforms + unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS) + + if unload_ok: + # Clean up coordinator + coordinator = hass.data[DOMAIN].pop(entry.entry_id) + await coordinator.async_shutdown() - _LOGGER.info(f"Migration to version {entry.version} successful") + return unload_ok - return True + except Exception as err: + _LOGGER.error("Error unloading entry: %s", err) + return False -async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry): - """Unload a config entry.""" - unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS) - hass.data[DOMAIN].pop(entry.entry_id) - return unload_ok +async def async_reload_entry(hass: HomeAssistant, entry: ConfigEntry) -> None: + """Reload config entry.""" + await async_unload_entry(hass, entry) + await async_setup_entry(hass, entry) diff --git a/custom_components/ai_automation_suggester/config_flow.py b/custom_components/ai_automation_suggester/config_flow.py index 1d4c675..2026244 100644 --- a/custom_components/ai_automation_suggester/config_flow.py +++ 
b/custom_components/ai_automation_suggester/config_flow.py @@ -1,58 +1,438 @@ +# custom_components/ai_automation_suggester/config_flow.py + """Config flow for AI Automation Suggester integration.""" import logging import voluptuous as vol +from typing import Any, Dict, Optional + from homeassistant import config_entries from homeassistant.core import callback -from .const import DOMAIN +from homeassistant.exceptions import ServiceValidationError +from homeassistant.helpers.aiohttp_client import async_get_clientsession + +from .const import ( + DOMAIN, + CONF_PROVIDER, + CONF_OPENAI_API_KEY, + CONF_OPENAI_MODEL, + CONF_ANTHROPIC_API_KEY, + CONF_ANTHROPIC_MODEL, + CONF_GOOGLE_API_KEY, + CONF_GOOGLE_MODEL, + CONF_GROQ_API_KEY, + CONF_GROQ_MODEL, + CONF_LOCALAI_IP_ADDRESS, + CONF_LOCALAI_PORT, + CONF_LOCALAI_HTTPS, + CONF_LOCALAI_MODEL, + CONF_OLLAMA_IP_ADDRESS, + CONF_OLLAMA_PORT, + CONF_OLLAMA_HTTPS, + CONF_OLLAMA_MODEL, + CONF_CUSTOM_OPENAI_ENDPOINT, + CONF_CUSTOM_OPENAI_API_KEY, + CONF_CUSTOM_OPENAI_MODEL, + DEFAULT_MODELS, + VERSION_ANTHROPIC, +) _LOGGER = logging.getLogger(__name__) +class ProviderValidator: + """Validate provider configurations.""" + def __init__(self, hass): + """Initialize validator.""" + self.hass = hass + self.session = async_get_clientsession(hass) + + async def validate_openai(self, api_key: str) -> bool: + """Validate OpenAI configuration.""" + headers = { + 'Authorization': f"Bearer {api_key}", + 'Content-Type': 'application/json', + } + try: + _LOGGER.debug("Validating OpenAI API key") + response = await self.session.get( + "https://api.openai.com/v1/models", + headers=headers + ) + is_valid = response.status == 200 + _LOGGER.debug("OpenAI validation result: %s", is_valid) + return is_valid + except Exception as err: + _LOGGER.error("OpenAI validation error: %s", err) + return False + + async def validate_anthropic(self, api_key: str) -> bool: + """Validate Anthropic configuration.""" + headers = { + 'x-api-key': api_key, + 
'anthropic-version': VERSION_ANTHROPIC, + 'content-type': 'application/json' + } + try: + _LOGGER.debug("Validating Anthropic API key") + response = await self.session.post( + "https://api.anthropic.com/v1/complete", + headers=headers, + json={ + "prompt": "\n\nTest", + "model": DEFAULT_MODELS["Anthropic"], + "max_tokens_to_sample": 1, + } + ) + is_valid = response.status == 200 + _LOGGER.debug("Anthropic validation result: %s", is_valid) + return is_valid + except Exception as err: + _LOGGER.error("Anthropic validation error: %s", err) + return False + + async def validate_google(self, api_key: str) -> bool: + """Validate Google configuration.""" + headers = { + 'Authorization': f"Bearer {api_key}", + 'Content-Type': 'application/json', + } + try: + _LOGGER.debug("Validating Google API key") + # Placeholder URL; replace with the actual Google API endpoint + response = await self.session.get( + "https://api.google.com/v1/models", + headers=headers + ) + is_valid = response.status == 200 + _LOGGER.debug("Google validation result: %s", is_valid) + return is_valid + except Exception as err: + _LOGGER.error("Google validation error: %s", err) + return False + + async def validate_groq(self, api_key: str) -> bool: + """Validate Groq configuration.""" + headers = { + 'Authorization': f"Bearer {api_key}", + 'Content-Type': 'application/json', + } + try: + _LOGGER.debug("Validating Groq API key") + # Placeholder URL; replace with the actual Groq API endpoint + response = await self.session.get( + "https://api.groq.com/v1/models", + headers=headers + ) + is_valid = response.status == 200 + _LOGGER.debug("Groq validation result: %s", is_valid) + return is_valid + except Exception as err: + _LOGGER.error("Groq validation error: %s", err) + return False + + async def validate_localai( + self, ip_address: str, port: int, https: bool = False + ) -> bool: + """Validate LocalAI configuration.""" + protocol = "https" if https else "http" + url = 
f"{protocol}://{ip_address}:{port}/v1/models" + try: + _LOGGER.debug("Validating LocalAI connection to %s", url) + response = await self.session.get(url) + is_valid = response.status == 200 + _LOGGER.debug("LocalAI validation result: %s", is_valid) + return is_valid + except Exception as err: + _LOGGER.error("LocalAI validation error: %s", err) + return False + + async def validate_ollama( + self, ip_address: str, port: int, https: bool = False + ) -> bool: + """Validate Ollama configuration.""" + protocol = "https" if https else "http" + url = f"{protocol}://{ip_address}:{port}/api/tags" + try: + _LOGGER.debug("Validating Ollama connection to %s", url) + response = await self.session.get(url) + is_valid = response.status == 200 + _LOGGER.debug("Ollama validation result: %s", is_valid) + return is_valid + except Exception as err: + _LOGGER.error("Ollama validation error: %s", err) + return False + class AIAutomationConfigFlow(config_entries.ConfigFlow, domain=DOMAIN): """Handle a config flow for AI Automation Suggester.""" - VERSION = 1.06 - - async def async_step_user(self, user_input=None): - """Handle the initial step.""" - errors = {} - if user_input is not None: - if not user_input.get("use_local_ai") and not user_input.get("openai_api_key"): - errors["openai_api_key"] = "required" - else: - return self.async_create_entry(title="AI Automation Suggester", data=user_input) + VERSION = 1 - data_schema = vol.Schema({ - vol.Required("scan_frequency", default=24): vol.All(vol.Coerce(int), vol.Range(min=0)), - vol.Required("initial_lag_time", default=10): vol.All(vol.Coerce(int), vol.Range(min=0, max=60)), - vol.Required("use_local_ai", default=False): bool, - vol.Optional("openai_api_key"): str, - }) - return self.async_show_form(step_id="user", data_schema=data_schema, errors=errors) + def __init__(self): + """Initialize config flow.""" + self.provider = None + self.data = {} + self.validator = None @staticmethod @callback def async_get_options_flow(config_entry): 
+        """Get the options flow for this handler."""
         return AIAutomationOptionsFlowHandler(config_entry)
+
+    async def async_step_user(self, user_input: Optional[Dict[str, Any]] = None):
+        """Handle the initial step."""
+        errors = {}
+
+        if user_input is not None:
+            self.provider = user_input[CONF_PROVIDER]
+            self.data.update(user_input)
+
+            # Move to provider-specific configuration
+            provider_steps = {
+                "OpenAI": self.async_step_openai,
+                "Anthropic": self.async_step_anthropic,
+                "Google": self.async_step_google,
+                "Groq": self.async_step_groq,
+                "LocalAI": self.async_step_localai,
+                "Ollama": self.async_step_ollama,
+                "Custom OpenAI": self.async_step_custom_openai,
+            }
+            return await provider_steps[self.provider]()
+
+        providers = ["OpenAI", "Anthropic", "Google", "Groq", "LocalAI", "Ollama", "Custom OpenAI"]
+        return self.async_show_form(
+            step_id="user",
+            data_schema=vol.Schema({
+                vol.Required(CONF_PROVIDER): vol.In(providers),
+            }),
+            errors=errors
+        )
+
+    async def async_step_openai(self, user_input: Optional[Dict[str, Any]] = None):
+        """Configure OpenAI settings."""
+        errors = {}
+
+        if user_input is not None:
+            self.validator = ProviderValidator(self.hass)
+            is_valid = await self.validator.validate_openai(user_input[CONF_OPENAI_API_KEY])
+
+            if is_valid:
+                self.data.update(user_input)
+                return self.async_create_entry(
+                    title="AI Automation Suggester (OpenAI)",
+                    data=self.data
+                )
+            errors["base"] = "invalid_auth"
+
+        return self.async_show_form(
+            step_id="openai",
+            data_schema=vol.Schema({
+                vol.Required(CONF_OPENAI_API_KEY): str,
+                vol.Optional(CONF_OPENAI_MODEL, default=DEFAULT_MODELS["OpenAI"]): str,
+            }),
+            errors=errors
+        )
+
+    async def async_step_anthropic(self, user_input: Optional[Dict[str, Any]] = None):
+        """Configure Anthropic settings."""
+        errors = {}
+
+        if user_input is not None:
+            self.validator = ProviderValidator(self.hass)
+            is_valid = await self.validator.validate_anthropic(
+                user_input[CONF_ANTHROPIC_API_KEY]
+            )
+
+            if is_valid:
+                self.data.update(user_input)
+                return self.async_create_entry(
+                    title="AI Automation Suggester (Anthropic)",
+                    data=self.data
+                )
+            errors["base"] = "invalid_auth"
+
+        return self.async_show_form(
+            step_id="anthropic",
+            data_schema=vol.Schema({
+                vol.Required(CONF_ANTHROPIC_API_KEY): str,
+                vol.Optional(CONF_ANTHROPIC_MODEL, default=DEFAULT_MODELS["Anthropic"]): str,
+            }),
+            errors=errors
+        )
+
+    async def async_step_google(self, user_input: Optional[Dict[str, Any]] = None):
+        """Configure Google settings."""
+        errors = {}
+
+        if user_input is not None:
+            self.validator = ProviderValidator(self.hass)
+            is_valid = await self.validator.validate_google(user_input[CONF_GOOGLE_API_KEY])
+
+            if is_valid:
+                self.data.update(user_input)
+                return self.async_create_entry(
+                    title="AI Automation Suggester (Google)",
+                    data=self.data
+                )
+            errors["base"] = "invalid_auth"
+
+        return self.async_show_form(
+            step_id="google",
+            data_schema=vol.Schema({
+                vol.Required(CONF_GOOGLE_API_KEY): str,
+                vol.Optional(CONF_GOOGLE_MODEL, default=DEFAULT_MODELS["Google"]): str,
+            }),
+            errors=errors
+        )
+
+    async def async_step_groq(self, user_input: Optional[Dict[str, Any]] = None):
+        """Configure Groq settings."""
+        errors = {}
+
+        if user_input is not None:
+            self.validator = ProviderValidator(self.hass)
+            is_valid = await self.validator.validate_groq(user_input[CONF_GROQ_API_KEY])
+
+            if is_valid:
+                self.data.update(user_input)
+                return self.async_create_entry(
+                    title="AI Automation Suggester (Groq)",
+                    data=self.data
+                )
+            errors["base"] = "invalid_auth"
+
+        return self.async_show_form(
+            step_id="groq",
+            data_schema=vol.Schema({
+                vol.Required(CONF_GROQ_API_KEY): str,
+                vol.Optional(CONF_GROQ_MODEL, default=DEFAULT_MODELS["Groq"]): str,
+            }),
+            errors=errors
+        )
+
+    async def async_step_localai(self, user_input: Optional[Dict[str, Any]] = None):
+        """Configure LocalAI settings."""
+        errors = {}
+
+        if user_input is not None:
+            self.validator = ProviderValidator(self.hass)
+            is_valid = await self.validator.validate_localai(
+                user_input[CONF_LOCALAI_IP_ADDRESS],
+                user_input[CONF_LOCALAI_PORT],
+                user_input[CONF_LOCALAI_HTTPS]
+            )
+
+            if is_valid:
+                self.data.update(user_input)
+                return self.async_create_entry(
+                    title="AI Automation Suggester (LocalAI)",
+                    data=self.data
+                )
+            errors["base"] = "cannot_connect"
+
+        return self.async_show_form(
+            step_id="localai",
+            data_schema=vol.Schema({
+                vol.Required(CONF_LOCALAI_IP_ADDRESS): str,
+                vol.Required(CONF_LOCALAI_PORT, default=8080): int,
+                vol.Required(CONF_LOCALAI_HTTPS, default=False): bool,
+                vol.Optional(CONF_LOCALAI_MODEL, default=DEFAULT_MODELS["LocalAI"]): str,
+            }),
+            errors=errors
+        )
+
+    async def async_step_ollama(self, user_input: Optional[Dict[str, Any]] = None):
+        """Configure Ollama settings."""
+        errors = {}
+
+        if user_input is not None:
+            self.validator = ProviderValidator(self.hass)
+            is_valid = await self.validator.validate_ollama(
+                user_input[CONF_OLLAMA_IP_ADDRESS],
+                user_input[CONF_OLLAMA_PORT],
+                user_input[CONF_OLLAMA_HTTPS]
+            )
+
+            if is_valid:
+                self.data.update(user_input)
+                return self.async_create_entry(
+                    title="AI Automation Suggester (Ollama)",
+                    data=self.data
+                )
+            errors["base"] = "cannot_connect"
+
+        return self.async_show_form(
+            step_id="ollama",
+            data_schema=vol.Schema({
+                vol.Required(CONF_OLLAMA_IP_ADDRESS): str,
+                vol.Required(CONF_OLLAMA_PORT, default=11434): int,
+                vol.Required(CONF_OLLAMA_HTTPS, default=False): bool,
+                vol.Optional(CONF_OLLAMA_MODEL, default=DEFAULT_MODELS["Ollama"]): str,
+            }),
+            errors=errors
+        )
+
+    async def async_step_custom_openai(self, user_input: Optional[Dict[str, Any]] = None):
+        """Configure Custom OpenAI settings."""
+        errors = {}
+
+        if user_input is not None:
+            # Minimal validation; you can add more if necessary
+            self.data.update(user_input)
+            return self.async_create_entry(
+                title="AI Automation Suggester (Custom OpenAI)",
+                data=self.data
+            )
+
+        return self.async_show_form(
+            step_id="custom_openai",
+            data_schema=vol.Schema({
+                vol.Required(CONF_CUSTOM_OPENAI_ENDPOINT): str,
+                vol.Optional(CONF_CUSTOM_OPENAI_API_KEY): str,
+                vol.Optional(CONF_CUSTOM_OPENAI_MODEL, default=DEFAULT_MODELS["Custom OpenAI"]): str,
+            }),
+            errors=errors
+        )
+
 class AIAutomationOptionsFlowHandler(config_entries.OptionsFlow):
-    """Handle options flow."""
-
+    """Handle options for the AI Automation Suggester."""
+
     def __init__(self, config_entry):
         """Initialize options flow."""
         self.config_entry = config_entry

-    async def async_step_init(self, user_input=None):
-        """Manage the AI Automation Suggester options."""
+    async def async_step_init(self, user_input: Optional[Dict[str, Any]] = None):
+        """Manage options."""
         if user_input is not None:
             return self.async_create_entry(title="", data=user_input)

-        data_schema = vol.Schema({
-            vol.Required("scan_frequency", default=self.config_entry.options.get("scan_frequency", 24)):
-                vol.All(vol.Coerce(int), vol.Range(min=0)),
-            vol.Required("initial_lag_time", default=self.config_entry.options.get("initial_lag_time", 10)):
-                vol.All(vol.Coerce(int), vol.Range(min=0, max=60)),
-            vol.Required("use_local_ai", default=self.config_entry.options.get("use_local_ai", False)): bool,
-            vol.Optional("openai_api_key", default=self.config_entry.options.get("openai_api_key", "")): str,
-        })
+        provider = self.config_entry.data.get(CONF_PROVIDER)
+        options = {}
+
+        # Add provider-specific options including model selection
+        if provider == "OpenAI":
+            options[vol.Optional(CONF_OPENAI_API_KEY)] = str
+            options[vol.Optional(CONF_OPENAI_MODEL, default=self.config_entry.data.get(CONF_OPENAI_MODEL, DEFAULT_MODELS["OpenAI"]))] = str
+        elif provider == "Anthropic":
+            options[vol.Optional(CONF_ANTHROPIC_API_KEY)] = str
+            options[vol.Optional(CONF_ANTHROPIC_MODEL, default=self.config_entry.data.get(CONF_ANTHROPIC_MODEL, DEFAULT_MODELS["Anthropic"]))] = str
+        elif provider == "Google":
+            options[vol.Optional(CONF_GOOGLE_API_KEY)] = str
+            options[vol.Optional(CONF_GOOGLE_MODEL, default=self.config_entry.data.get(CONF_GOOGLE_MODEL, DEFAULT_MODELS["Google"]))] = str
+        elif provider == "Groq":
+            options[vol.Optional(CONF_GROQ_API_KEY)] = str
+            options[vol.Optional(CONF_GROQ_MODEL, default=self.config_entry.data.get(CONF_GROQ_MODEL, DEFAULT_MODELS["Groq"]))] = str
+        elif provider == "LocalAI":
+            options[vol.Optional(CONF_LOCALAI_HTTPS)] = bool
+            options[vol.Optional(CONF_LOCALAI_MODEL, default=self.config_entry.data.get(CONF_LOCALAI_MODEL, DEFAULT_MODELS["LocalAI"]))] = str
+        elif provider == "Ollama":
+            options[vol.Optional(CONF_OLLAMA_HTTPS)] = bool
+            options[vol.Optional(CONF_OLLAMA_MODEL, default=self.config_entry.data.get(CONF_OLLAMA_MODEL, DEFAULT_MODELS["Ollama"]))] = str
+        elif provider == "Custom OpenAI":
+            options[vol.Optional(CONF_CUSTOM_OPENAI_ENDPOINT)] = str
+            options[vol.Optional(CONF_CUSTOM_OPENAI_API_KEY)] = str
+            options[vol.Optional(CONF_CUSTOM_OPENAI_MODEL, default=self.config_entry.data.get(CONF_CUSTOM_OPENAI_MODEL, DEFAULT_MODELS["Custom OpenAI"]))] = str

-        return self.async_show_form(step_id="init", data_schema=data_schema)
+        return self.async_show_form(
+            step_id="init",
+            data_schema=vol.Schema(options)
+        )
diff --git a/custom_components/ai_automation_suggester/const.py b/custom_components/ai_automation_suggester/const.py
index 0abb32c..b529fdf 100644
--- a/custom_components/ai_automation_suggester/const.py
+++ b/custom_components/ai_automation_suggester/const.py
@@ -1,4 +1,86 @@
+# custom_components/ai_automation_suggester/const.py
+
 """Constants for the AI Automation Suggester integration."""

 DOMAIN = "ai_automation_suggester"
 PLATFORMS = ["sensor"]
+
+# Provider configuration
+CONF_PROVIDER = "provider"
+
+# OpenAI specific
+CONF_OPENAI_API_KEY = "openai_api_key"
+CONF_OPENAI_MODEL = "openai_model"
+
+# Anthropic specific
+CONF_ANTHROPIC_API_KEY = "anthropic_api_key"
+CONF_ANTHROPIC_MODEL = "anthropic_model"
+VERSION_ANTHROPIC = "2023-06-01"
+
+# Google specific
+CONF_GOOGLE_API_KEY = "google_api_key"
+CONF_GOOGLE_MODEL = "google_model"
+
+# Groq specific
+CONF_GROQ_API_KEY = "groq_api_key"
+CONF_GROQ_MODEL = "groq_model"
+
+# LocalAI specific
+CONF_LOCALAI_IP_ADDRESS = "localai_ip"
+CONF_LOCALAI_PORT = "localai_port"
+CONF_LOCALAI_HTTPS = "localai_https"
+CONF_LOCALAI_MODEL = "localai_model"
+
+# Ollama specific
+CONF_OLLAMA_IP_ADDRESS = "ollama_ip"
+CONF_OLLAMA_PORT = "ollama_port"
+CONF_OLLAMA_HTTPS = "ollama_https"
+CONF_OLLAMA_MODEL = "ollama_model"
+
+# Custom OpenAI specific
+CONF_CUSTOM_OPENAI_ENDPOINT = "custom_openai_endpoint"
+CONF_CUSTOM_OPENAI_API_KEY = "custom_openai_api_key"
+CONF_CUSTOM_OPENAI_MODEL = "custom_openai_model"
+
+# Model Defaults (model IDs the provider APIs actually accept)
+DEFAULT_MODELS = {
+    "OpenAI": "gpt-4o-mini",
+    "Anthropic": "claude-3-haiku-20240307",
+    "Google": "gemini-1.5-flash",
+    "Groq": "llama3-8b-8192",
+    "LocalAI": "llama3",
+    "Ollama": "llama2",
+    "Custom OpenAI": "gpt-3.5-turbo"
+}
+
+# Error Messages
+ERROR_INVALID_API_KEY = "Invalid API key"
+ERROR_CONNECTION_FAILED = "Could not connect to server"
+ERROR_INVALID_CONFIG = "Invalid configuration"
+
+# Service attributes
+ATTR_PROVIDER_CONFIG = "provider_config"
+ATTR_CUSTOM_PROMPT = "custom_prompt"
+
+SERVICE_GENERATE_SUGGESTIONS = "generate_suggestions"
+
+# Provider statuses
+PROVIDER_STATUS_CONNECTED = "connected"
+PROVIDER_STATUS_DISCONNECTED = "disconnected"
+PROVIDER_STATUS_ERROR = "error"
+
+# Event types
+EVENT_NEW_SUGGESTION = f"{DOMAIN}_new_suggestion"
+EVENT_PROVIDER_STATUS_CHANGE = f"{DOMAIN}_provider_status_change"
+
+# Configuration defaults
+DEFAULT_MAX_TOKENS = 500
+DEFAULT_TEMPERATURE = 0.7
+
+# API Endpoints
+ENDPOINT_OPENAI = "https://api.openai.com/v1/chat/completions"
+ENDPOINT_ANTHROPIC = "https://api.anthropic.com/v1/messages"
+ENDPOINT_GOOGLE = "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={api_key}"
+ENDPOINT_GROQ = "https://api.groq.com/openai/v1/chat/completions"
+ENDPOINT_LOCALAI = "{base_url}/v1/chat/completions"
+ENDPOINT_OLLAMA = "{base_url}/api/chat"
diff --git a/custom_components/ai_automation_suggester/coordinator.py b/custom_components/ai_automation_suggester/coordinator.py
index 43fc2e5..569d3df 100644
--- a/custom_components/ai_automation_suggester/coordinator.py
+++ b/custom_components/ai_automation_suggester/coordinator.py
@@ -1,160 +1,552 @@
+# custom_components/ai_automation_suggester/coordinator.py
+
 """Coordinator for AI Automation Suggester."""

 import logging
-from datetime import timedelta, datetime
-from homeassistant.components.persistent_notification import async_create
+from datetime import datetime
+import aiohttp
+import json
+from homeassistant.components import persistent_notification
 from homeassistant.core import HomeAssistant
 from homeassistant.helpers.update_coordinator import DataUpdateCoordinator
-from .const import DOMAIN
+from homeassistant.helpers.aiohttp_client import async_get_clientsession
+
+from .const import (
+    DOMAIN,
+    CONF_PROVIDER,
+    DEFAULT_MODELS,
+    CONF_OPENAI_API_KEY,
+    CONF_OPENAI_MODEL,
+    CONF_ANTHROPIC_API_KEY,
+    CONF_ANTHROPIC_MODEL,
+    CONF_GOOGLE_API_KEY,
+    CONF_GOOGLE_MODEL,
+    CONF_GROQ_API_KEY,
+    CONF_GROQ_MODEL,
+    CONF_LOCALAI_IP_ADDRESS,
+    CONF_LOCALAI_PORT,
+    CONF_LOCALAI_HTTPS,
+    CONF_LOCALAI_MODEL,
+    CONF_OLLAMA_IP_ADDRESS,
+    CONF_OLLAMA_PORT,
+    CONF_OLLAMA_HTTPS,
+    CONF_OLLAMA_MODEL,
+    CONF_CUSTOM_OPENAI_ENDPOINT,
+    CONF_CUSTOM_OPENAI_API_KEY,
+    CONF_CUSTOM_OPENAI_MODEL,
+    DEFAULT_MAX_TOKENS,
+    DEFAULT_TEMPERATURE,
+    VERSION_ANTHROPIC,
+    ENDPOINT_OPENAI,
+    ENDPOINT_ANTHROPIC,
+    ENDPOINT_GOOGLE,
+    ENDPOINT_GROQ,
+    ENDPOINT_LOCALAI,
+    ENDPOINT_OLLAMA,
+)

 _LOGGER = logging.getLogger(__name__)

+SYSTEM_PROMPT = """You are an AI assistant that generates Home Assistant automations
+based on the types of new entities discovered in the system. Your goal
+is to provide detailed and useful automation suggestions tailored to
+the specific types and functions of the entities, avoiding generic recommendations.
+
+For each entity:
+1. Understand its function (e.g., sensor, switch, light, climate control).
+2. Consider its current state (e.g., 'on', 'off', 'open', 'closed', 'temperature').
+3. Suggest automations based on common use cases for similar entities.
+4. Avoid generic suggestions. Instead, provide detailed scenarios such as:
+   - 'If the front door sensor detects it is open for more than 5 minutes, send a notification.'
+   - 'If no motion is detected for 10 minutes, turn off all lights.'
+   - 'If the temperature sensor detects a rise above 25°C, turn on the air conditioner.'
+5. Consider combining multiple entities to create context-aware automations.
+6. Include appropriate conditions and triggers for time of day, presence, or other contextual factors.
+7. Format suggestions in clear, implementable steps.
+8. When suggesting scenes, include all relevant entities that should be controlled.
+9. Consider energy efficiency and user convenience in your suggestions.
+10. Include the actual entity IDs in your suggestions so they can be easily implemented.
+11. Suggest automations that make sense based on the entity's domain and capabilities.
+12. Consider security implications for sensitive automations (like doors or windows)."""
+
 class AIAutomationCoordinator(DataUpdateCoordinator):
     """Class to manage fetching data from AI model."""

-    def __init__(self, hass: HomeAssistant, entry):
+    def __init__(self, hass: HomeAssistant, entry) -> None:
         """Initialize."""
         self.hass = hass
         self.entry = entry
-        self.previous_entities = {}  # Initialize previous_entities to an empty dictionary
-        self.last_update = None  # Track the last update time
-        scan_frequency = entry.data.get("scan_frequency", 24)
-        initial_lag_time = entry.data.get("initial_lag_time", 10)  # Default to 10 minutes lag
+        self.previous_entities = {}
+        self.last_update = None
+        self.SYSTEM_PROMPT = SYSTEM_PROMPT
+
+        # Initialize data
+        self.data = {
+            "suggestions": "No suggestions yet",
+            "last_update": None,
+            "entities_processed": [],
+            "provider": entry.data.get(CONF_PROVIDER, "unknown")
+        }

-        if scan_frequency == 0:
-            self.update_interval = None  # Disable automatic updates
-        else:
-            self.update_interval = timedelta(hours=scan_frequency)
+        # Prevent automatic updates by setting update_interval to None
+        self.update_interval = None

-        super().__init__(hass, _LOGGER, name=DOMAIN, update_interval=self.update_interval)
+        self.session = async_get_clientsession(hass)

-        # Set the initial update delay
-        if initial_lag_time > 0:
-            _LOGGER.debug(f"Delaying initial suggestions by {initial_lag_time} minutes.")
-            self.hass.loop.call_later(initial_lag_time * 60, self.async_request_refresh)
+        super().__init__(
+            hass,
+            _LOGGER,
+            name=DOMAIN,
+            update_interval=self.update_interval,
+        )

     async def _async_update_data(self):
         """Fetch data from AI model."""
-        current_time = datetime.now()
+        try:
+            current_time = datetime.now()
+
+            _LOGGER.debug("Starting manual update at %s", current_time)

-        # Check if scan frequency has passed, but only if update_interval is not None
-        if self.update_interval is not None and self.last_update and current_time - self.last_update < self.update_interval:
-            _LOGGER.debug("Skipping update, scan frequency interval not reached.")
-            return self.previous_entities  # Return previous data without update
+            self.last_update = current_time

-        # Proceed with the regular update process
-        self.last_update = current_time  # Update the last fetch time
+            # Fetch current entities
+            _LOGGER.debug("Fetching current entities")
+            try:
+                current_entities = {}
+                for entity_id in self.hass.states.async_entity_ids():
+                    state = self.hass.states.get(entity_id)
+                    if state is not None:
+                        friendly_name = state.attributes.get('friendly_name', entity_id)
+                        current_entities[entity_id] = {
+                            'state': state.state,
+                            'attributes': state.attributes,
+                            'last_changed': state.last_changed,
+                            'last_updated': state.last_updated,
+                            'friendly_name': friendly_name
+                        }
+            except Exception as err:
+                _LOGGER.error("Error fetching entities: %s", err)
+                return self.data

-        # Fetch the list of current entities
-        current_entities = {
-            entity_id: self.hass.states.get(entity_id).as_dict()
-            for entity_id in self.hass.states.async_entity_ids()
-        }
+            # Detect newly added entities
+            new_entities = {
+                k: v for k, v in current_entities.items()
+                if k not in self.previous_entities
+            }

-        # Detect newly added entities
-        new_entities = {
-            k: v for k, v in current_entities.items() if k not in self.previous_entities
-        }
+            # Debug log the entities being processed
+            _LOGGER.debug("Found new entities: %s", list(new_entities.keys()))

-        # Limit the number of new entities to process
-        MAX_NEW_ENTITIES = 10
-        total_new_entities = len(new_entities)
-        if total_new_entities > MAX_NEW_ENTITIES:
-            new_entities = dict(list(new_entities.items())[:MAX_NEW_ENTITIES])
+            if not new_entities:
+                _LOGGER.debug("No new entities detected")
+                return self.data

-        ai_input_data = {"new_entities": new_entities}
+            # Log number of new entities found
+            _LOGGER.info("Found %d new entities", len(new_entities))

-        suggestions = await self.hass.async_add_executor_job(
-            self.get_ai_suggestions, ai_input_data
-        )
+            # Limit processing to 10 entities if needed
+            if len(new_entities) > 10:
+                _LOGGER.debug("Limiting to 10 entities for processing")
+                new_entities = dict(list(new_entities.items())[:10])
+
+            # Prepare AI input
+            ai_input_data = self.prepare_ai_input(new_entities)
+            _LOGGER.debug("Prepared AI input: %s", ai_input_data)

-        self.previous_entities = current_entities  # Update previous_entities with current ones
+            # Get suggestions from AI
+            suggestions = await self.get_ai_suggestions(ai_input_data)
+
+            if suggestions:
+                _LOGGER.debug("Received suggestions: %s", suggestions)
+                try:
+                    # async_create is a plain callback, not a coroutine, so it must not be awaited
+                    persistent_notification.async_create(
+                        self.hass,
+                        message=suggestions,
+                        title="AI Automation Suggestions",
+                        notification_id=f"ai_automation_suggestions_{current_time.timestamp()}"
+                    )
+
+                    self.data = {
+                        "suggestions": suggestions,
+                        "last_update": current_time,
+                        "entities_processed": list(new_entities.keys()),
+                        "provider": self.entry.data.get(CONF_PROVIDER, "unknown")
+                    }
+                except Exception as err:
+                    _LOGGER.error("Error creating notification: %s", err)
+                    return self.data
+            else:
+                _LOGGER.warning("No valid suggestions received from AI")
+                self.data = {
+                    "suggestions": "No suggestions available",
+                    "last_update": current_time,
+                    "entities_processed": [],
+                    "provider": self.entry.data.get(CONF_PROVIDER, "unknown")
+                }

-        if suggestions:
-            async_create(
-                hass=self.hass,
-                title="AI Automation Suggestions",
-                message=suggestions,
-                notification_id="ai_automation_suggestions"
+            # Always update previous entities list
+            self.previous_entities = current_entities
+
+            return self.data
+
+        except Exception as err:
+            _LOGGER.error("Unexpected error in update: %s", err)
+            return self.data
+
+    def prepare_ai_input(self, new_entities):
+        """Prepare the input data for AI processing."""
+        _LOGGER.debug("Preparing AI input for %d entities", len(new_entities))
+
+        entities_description = []
+        for entity_id, entity_data in new_entities.items():
+            state = entity_data.get('state', 'unknown')
+            attributes = entity_data.get('attributes', {})
+            friendly_name = entity_data.get('friendly_name', entity_id)
+            domain = entity_id.split('.')[0]
+
+            # Enhanced entity description
+            description = (
+                f"Entity: {entity_id}\n"
+                f"Friendly Name: {friendly_name}\n"
+                f"Domain: {domain}\n"
+                f"State: {state}\n"
+                f"Attributes: {attributes}\n"
+                f"Last Changed: {entity_data.get('last_changed', 'unknown')}\n"
+                f"Last Updated: {entity_data.get('last_updated', 'unknown')}\n"
+                f"---\n"
             )
+            entities_description.append(description)
+
+        prompt = (
+            f"{self.SYSTEM_PROMPT}\n\n"
+            f"New entities discovered:\n"
+            f"{''.join(entities_description)}\n"
+            f"Please suggest detailed and specific automations for these entities, "
+            f"using their exact entity IDs in the suggestions."
+        )
+        return prompt
+
+    async def get_ai_suggestions(self, prompt):
+        """Get suggestions from the configured AI provider."""
+        provider = self.entry.data.get(CONF_PROVIDER, "OpenAI")
+        _LOGGER.debug("Using AI provider: %s", provider)
+
+        try:
+            if provider == "OpenAI":
+                return await self.process_with_openai(prompt)
+            elif provider == "Anthropic":
+                return await self.process_with_anthropic(prompt)
+            elif provider == "Google":
+                return await self.process_with_google(prompt)
+            elif provider == "Groq":
+                return await self.process_with_groq(prompt)
+            elif provider == "LocalAI":
+                return await self.process_with_localai(prompt)
+            elif provider == "Ollama":
+                return await self.process_with_ollama(prompt)
+            elif provider == "Custom OpenAI":
+                return await self.process_with_custom_openai(prompt)
+            else:
+                _LOGGER.error("Unknown provider: %s", provider)
+                return None
+        except Exception as err:
+            _LOGGER.error("Error getting suggestions: %s", err)
+            return None
+
+    async def process_with_openai(self, prompt):
+        """Process the prompt with OpenAI."""
+        try:
+            api_key = self.entry.data.get(CONF_OPENAI_API_KEY)
+            model = self.entry.data.get(CONF_OPENAI_MODEL, DEFAULT_MODELS["OpenAI"])
+            if not api_key:
+                raise ValueError("OpenAI API key not configured")
+
+            _LOGGER.debug("Making OpenAI API request with model %s", model)
+
+            headers = {
+                "Content-Type": "application/json",
+                "Authorization": f"Bearer {api_key}"
+            }
+
+            data = {
+                "model": model,
+                "messages": [
+                    {"role": "user", "content": prompt}
+                ],
+                "max_tokens": DEFAULT_MAX_TOKENS,
+                "temperature": DEFAULT_TEMPERATURE
+            }
+
+            async with self.session.post(
+                ENDPOINT_OPENAI,
+                headers=headers,
+                json=data
+            ) as response:
+                if response.status != 200:
+                    error_text = await response.text()
+                    _LOGGER.error("OpenAI API error: %s", error_text)
+                    return None
+
+                result = await response.json()
+                return result["choices"][0]["message"]["content"]

-        return suggestions
+        except Exception as err:
+            _LOGGER.error("Error processing with OpenAI: %s", err)
+            return None

+    async def process_with_anthropic(self, prompt):
+        """Process the prompt with Anthropic."""
+        try:
+            api_key = self.entry.data.get(CONF_ANTHROPIC_API_KEY)
+            model = self.entry.data.get(CONF_ANTHROPIC_MODEL, DEFAULT_MODELS["Anthropic"])
+            if not api_key:
+                raise ValueError("Anthropic API key not configured")
+
+            _LOGGER.debug("Making Anthropic API request with model %s", model)
+
+            headers = {
+                "Content-Type": "application/json",
+                "X-API-Key": api_key,
+                "anthropic-version": VERSION_ANTHROPIC
+            }
+
+            data = {
+                "model": model,
+                "messages": [
+                    {"role": "user", "content": prompt}
+                ],
+                "max_tokens": DEFAULT_MAX_TOKENS,
+                "temperature": DEFAULT_TEMPERATURE
+            }
+
+            async with self.session.post(
+                ENDPOINT_ANTHROPIC,
+                headers=headers,
+                json=data
+            ) as response:
+                if response.status != 200:
+                    error_text = await response.text()
+                    _LOGGER.error("Anthropic API error: %s", error_text)
+                    return None
+
+                result = await response.json()
+                return result["content"][0]["text"]
+        except Exception as err:
+            _LOGGER.error("Error processing with Anthropic: %s", err)
+            return None

-    def get_ai_suggestions(self, ai_input_data):
-        """Process data with AI model."""
-        use_local_ai = self.entry.data.get("use_local_ai", False)
-        if use_local_ai:
-            return self.local_ai_analysis(ai_input_data)
-        else:
-            return self.cloud_ai_analysis(ai_input_data)
+    async def process_with_google(self, prompt):
+        """Process the prompt with Google."""
+        try:
+            api_key = self.entry.data.get(CONF_GOOGLE_API_KEY)
+            model = self.entry.data.get(CONF_GOOGLE_MODEL, DEFAULT_MODELS["Google"])
+            if not api_key:
+                raise ValueError("Google API key not configured")

-    def local_ai_analysis(self, ai_input_data):
-        """Analyze data using a local AI model."""
-        return "Local AI analysis is not yet implemented."
+            _LOGGER.debug("Making Google API request with model %s", model)
+
+            headers = {
+                "Content-Type": "application/json",
+            }
+
+            data = {
+                "contents": [
+                    {"parts": [{"text": prompt}]}
+                ],
+                "generationConfig": {
+                    "temperature": DEFAULT_TEMPERATURE,
+                    "candidateCount": 1,
+                    "maxOutputTokens": DEFAULT_MAX_TOKENS
+                }
+            }
+
+            endpoint = ENDPOINT_GOOGLE.format(model=model, api_key=api_key)
+
+            async with self.session.post(
+                endpoint,
+                headers=headers,
+                json=data
+            ) as response:
+                if response.status != 200:
+                    error_text = await response.text()
+                    _LOGGER.error("Google API error: %s", error_text)
+                    return None
+
+                result = await response.json()
+                return result["candidates"][0]["content"]["parts"][0]["text"]

-    def cloud_ai_analysis(self, ai_input_data):
-        """Analyze data using the OpenAI ChatCompletion API."""
-        import openai
-        api_key = self.entry.data.get("openai_api_key")
-        if not api_key:
-            _LOGGER.error("OpenAI API key is missing.")
-            return "OpenAI API key is missing."
+        except Exception as err:
+            _LOGGER.error("Error processing with Google: %s", err)
+            return None

-        openai.api_key = api_key
-        prompt = self.generate_prompt(ai_input_data)
+    async def process_with_groq(self, prompt):
+        """Process the prompt with Groq."""
         try:
-            response = openai.chat.completions.create(
-                model="gpt-4o-mini",
-                messages=[
+            api_key = self.entry.data.get(CONF_GROQ_API_KEY)
+            model = self.entry.data.get(CONF_GROQ_MODEL, DEFAULT_MODELS["Groq"])
+            if not api_key:
+                raise ValueError("Groq API key not configured")
+
+            _LOGGER.debug("Making Groq API request with model %s", model)
+
+            headers = {
+                "Content-Type": "application/json",
+                "Authorization": f"Bearer {api_key}"
+            }
+
+            data = {
+                "messages": [
                     {
-                        "role": "system",
-                        "content": (
-                            "You are an AI assistant that generates Home Assistant automations "
-                            "based on the types of new entities discovered in the system. Your goal "
-                            "is to provide detailed and useful automation suggestions tailored to "
-                            "the specific types and functions of the entities, avoiding generic recommendations. "
-                            "For each entity:\n"
-                            "1. Understand its function (e.g., sensor, switch, light, climate control).\n"
-                            "2. Consider its current state (e.g., 'on', 'off', 'open', 'closed', 'temperature').\n"
-                            "3. Suggest automations based on common use cases for similar entities.\n"
-                            "4. Avoid generic suggestions. Instead, provide detailed scenarios such as:\n"
-                            "- 'If the front door sensor detects it is open for more than 5 minutes, send a notification.'\n"
-                            "- 'If no motion is detected for 10 minutes, turn off all lights.'\n"
-                            "- 'If the temperature sensor detects a rise above 25°C, turn on the air conditioner.'\n"
-                            "5. Consider combining multiple entities to create context-aware automations."
-                        )
-                    },
-                    {"role": "user", "content": prompt},
+                        "role": "user",
+                        "content": [
+                            {"type": "text", "text": prompt}
+                        ]
+                    }
                 ],
-                max_tokens=500,
-                n=1,
-                temperature=0.7,
-            )
-            suggestions = response.choices[0].message.content.strip()
-            return suggestions
-        except Exception as e:
-            _LOGGER.error(f"Error communicating with OpenAI: {e}")
-            return f"Error communicating with OpenAI: {e}"
-
-    def generate_prompt(self, ai_input_data):
-        """Generate prompt for AI model."""
-        new_entities_list = [
-            f"{entity_id}: {entity['state']}"
-            for entity_id, entity in ai_input_data['new_entities'].items()
-        ]
-
-        MAX_ENTITIES = 10
-        total_new_entities = len(new_entities_list)
-        if total_new_entities > MAX_ENTITIES:
-            new_entities_list = new_entities_list[:MAX_ENTITIES]
-            entities_info = f"{MAX_ENTITIES} of {total_new_entities} new entities"
-        else:
-            entities_info = f"{total_new_entities} new entities"
+                "model": model
+            }
+
+            async with self.session.post(
+                ENDPOINT_GROQ,
+                headers=headers,
+                json=data
+            ) as response:
+                if response.status != 200:
+                    error_text = await response.text()
+                    _LOGGER.error("Groq API error: %s", error_text)
+                    return None
+
+                result = await response.json()
+                return result["choices"][0]["message"]["content"]

-        prompt = (
-            f"Analyze the following {entities_info} added to my Home Assistant setup and suggest potential automations:\n"
-        )
-        prompt += "\n".join(new_entities_list)
-        prompt += "\n\nProvide the suggestions in a clear and concise manner."
- return prompt + except Exception as err: + _LOGGER.error("Error processing with Groq: %s", err) + return None + + async def process_with_localai(self, prompt): + """Process the prompt with LocalAI.""" + try: + ip_address = self.entry.data.get(CONF_LOCALAI_IP_ADDRESS) + port = self.entry.data.get(CONF_LOCALAI_PORT) + https = self.entry.data.get(CONF_LOCALAI_HTTPS, False) + model = self.entry.data.get(CONF_LOCALAI_MODEL, DEFAULT_MODELS["LocalAI"]) + + if not ip_address or not port: + raise ValueError("LocalAI configuration incomplete") + + protocol = "https" if https else "http" + base_url = f"{protocol}://{ip_address}:{port}" + endpoint = ENDPOINT_LOCALAI.format(base_url=base_url) + + _LOGGER.debug("Making LocalAI API request to %s with model %s", endpoint, model) + + data = { + "model": model, + "messages": [ + {"role": "user", "content": prompt} + ], + "max_tokens": DEFAULT_MAX_TOKENS, + "temperature": DEFAULT_TEMPERATURE + } + + async with self.session.post( + endpoint, + json=data + ) as response: + if response.status != 200: + error_text = await response.text() + _LOGGER.error("LocalAI API error: %s", error_text) + return None + + result = await response.json() + return result["choices"][0]["message"]["content"] + + except Exception as err: + _LOGGER.error("Error processing with LocalAI: %s", err) + return None + + async def process_with_ollama(self, prompt): + """Process the prompt with Ollama.""" + try: + ip_address = self.entry.data.get(CONF_OLLAMA_IP_ADDRESS) + port = self.entry.data.get(CONF_OLLAMA_PORT) + https = self.entry.data.get(CONF_OLLAMA_HTTPS, False) + model = self.entry.data.get(CONF_OLLAMA_MODEL, DEFAULT_MODELS["Ollama"]) + + if not ip_address or not port: + raise ValueError("Ollama configuration incomplete") + + protocol = "https" if https else "http" + base_url = f"{protocol}://{ip_address}:{port}" + endpoint = ENDPOINT_OLLAMA.format(base_url=base_url) + + _LOGGER.debug("Making Ollama API request to %s with model %s", endpoint, model) + + 
+            data = {
+                "model": model,
+                "messages": [
+                    {"role": "user", "content": prompt}
+                ],
+                "stream": False,
+                "options": {
+                    "temperature": DEFAULT_TEMPERATURE,
+                    "num_predict": DEFAULT_MAX_TOKENS
+                }
+            }
+
+            async with self.session.post(
+                endpoint,
+                json=data
+            ) as response:
+                if response.status != 200:
+                    error_text = await response.text()
+                    _LOGGER.error("Ollama API error: %s", error_text)
+                    return None
+
+                result = await response.json()
+                return result["message"]["content"]
+
+        except Exception as err:
+            _LOGGER.error("Error processing with Ollama: %s", err)
+            return None
+
+    async def process_with_custom_openai(self, prompt):
+        """Process the prompt with Custom OpenAI-compatible API."""
+        try:
+            endpoint = self.entry.data.get(CONF_CUSTOM_OPENAI_ENDPOINT)
+            api_key = self.entry.data.get(CONF_CUSTOM_OPENAI_API_KEY)
+            model = self.entry.data.get(CONF_CUSTOM_OPENAI_MODEL, DEFAULT_MODELS["Custom OpenAI"])
+            if not endpoint:
+                raise ValueError("Custom OpenAI endpoint not configured")
+
+            _LOGGER.debug("Making Custom OpenAI API request to %s with model %s", endpoint, model)
+
+            headers = {
+                "Content-Type": "application/json",
+            }
+            if api_key:
+                headers["Authorization"] = f"Bearer {api_key}"
+
+            data = {
+                "model": model,
+                "messages": [
+                    {"role": "user", "content": prompt}
+                ],
+                "max_tokens": DEFAULT_MAX_TOKENS,
+                "temperature": DEFAULT_TEMPERATURE
+            }
+
+            async with self.session.post(
+                endpoint,
+                headers=headers,
+                json=data
+            ) as response:
+                if response.status != 200:
+                    error_text = await response.text()
+                    _LOGGER.error("Custom OpenAI API error: %s", error_text)
+                    return None
+
+                result = await response.json()
+                return result["choices"][0]["message"]["content"]
+
+        except Exception as err:
+            _LOGGER.error("Error processing with Custom OpenAI: %s", err)
+            return None
diff --git a/custom_components/ai_automation_suggester/manifest.json b/custom_components/ai_automation_suggester/manifest.json
index d5b33a4..ed84657 100644
--- a/custom_components/ai_automation_suggester/manifest.json
+++ b/custom_components/ai_automation_suggester/manifest.json
@@ -7,7 +7,12 @@
   "documentation": "https://github.com/ITSpecialist111/ai_automation_suggester",
   "iot_class": "cloud_polling",
   "issue_tracker": "https://github.com/ITSpecialist111/ai_automation_suggester/issues",
-  "requirements": ["openai>=1.0.0,<2.0.0"],
-  "version": "1.06"
-}
-
+  "requirements": [
+    "openai>=1.0.0,<2.0.0",
+    "anthropic>=0.8.0",
+    "aiohttp>=3.8.0",
+    "pyyaml>=6.0",
+    "voluptuous>=0.13.1"
+  ],
+  "version": "1.0.7"
+}
\ No newline at end of file
diff --git a/custom_components/ai_automation_suggester/sensor.py b/custom_components/ai_automation_suggester/sensor.py
index e1d299e..b197932 100644
--- a/custom_components/ai_automation_suggester/sensor.py
+++ b/custom_components/ai_automation_suggester/sensor.py
@@ -1,34 +1,243 @@
+# custom_components/ai_automation_suggester/sensor.py
+
 """Sensor platform for AI Automation Suggester."""
-from homeassistant.components.sensor import SensorEntity
-from homeassistant.helpers.update_coordinator import CoordinatorEntity
+import logging
+from homeassistant.components.sensor import (
+    SensorEntity,
+    SensorEntityDescription,
+)
+from homeassistant.helpers.update_coordinator import (
+    CoordinatorEntity,
+    DataUpdateCoordinator,
+)
+from homeassistant.helpers.entity import EntityCategory
+from homeassistant.const import STATE_UNKNOWN
+from homeassistant.core import callback
+
+from .const import (
+    DOMAIN,
+    CONF_PROVIDER,
+    PROVIDER_STATUS_CONNECTED,
+    PROVIDER_STATUS_DISCONNECTED,
+    PROVIDER_STATUS_ERROR,
+)
 
-from .const import DOMAIN
+_LOGGER = logging.getLogger(__name__)
 
+SUGGESTION_SENSOR = SensorEntityDescription(
+    key="suggestions",
+    name="AI Automation Suggestions",
+    icon="mdi:robot",
+)
+
+STATUS_SENSOR = SensorEntityDescription(
+    key="status",
+    name="AI Provider Status",
+    icon="mdi:check-network",
+    entity_category=EntityCategory.DIAGNOSTIC,
+)
 
 async def async_setup_entry(hass, entry, async_add_entities):
     """Set up the sensor platform."""
     coordinator = hass.data[DOMAIN][entry.entry_id]
-    async_add_entities([AISuggestionsSensor(coordinator)], True)
+
+    entities = [
+        AISuggestionsSensor(
+            coordinator=coordinator,
+            entry=entry,
+            description=SUGGESTION_SENSOR,
+        ),
+        AIProviderStatusSensor(
+            coordinator=coordinator,
+            entry=entry,
+            description=STATUS_SENSOR,
+        ),
+    ]
+
+    async_add_entities(entities, True)
+    _LOGGER.debug("Sensor platform setup complete")
 
 
 class AISuggestionsSensor(CoordinatorEntity, SensorEntity):
     """Sensor to display AI suggestions."""
 
-    def __init__(self, coordinator):
+    def __init__(
+        self,
+        coordinator: DataUpdateCoordinator,
+        entry,
+        description: SensorEntityDescription,
+    ) -> None:
         """Initialize the sensor."""
         super().__init__(coordinator)
-        self._attr_name = "AI Automation Suggestions"
-        self._attr_unique_id = "ai_automation_suggestions_sensor"
-        self._attr_icon = "mdi:robot"
+        self.entity_description = description
+        self._attr_unique_id = f"{entry.entry_id}_{description.key}"
+        self._attr_device_info = {
+            "identifiers": {(DOMAIN, entry.entry_id)},
+            "name": f"AI Automation Suggester ({entry.data.get(CONF_PROVIDER, 'unknown')})",
+            "manufacturer": "Community",
+            "model": entry.data.get(CONF_PROVIDER, "unknown"),
+            "sw_version": "1.0.7",
+        }
+        self._entry = entry
+        self._previous_suggestions = None
+
+    @property
+    def name(self) -> str:
+        """Return the name of the sensor."""
+        provider = self._entry.data.get(CONF_PROVIDER, "unknown")
+        return f"AI Automation Suggestions ({provider})"
 
     @property
-    def state(self):
+    def native_value(self) -> str:
         """Return the state of the sensor."""
-        if self.coordinator.data:
+        if self.coordinator.data and self.coordinator.data.get("suggestions"):
+            if self.coordinator.data.get("suggestions") != self._previous_suggestions:
+                self._previous_suggestions = self.coordinator.data.get("suggestions")
+                return "New Suggestions Available"
             return "Suggestions Available"
         return "No Suggestions"
 
     @property
-    def extra_state_attributes(self):
+    def extra_state_attributes(self) -> dict:
+        """Return the state attributes."""
+        if not self.coordinator.data:
+            return {
+                "suggestions": "No suggestions yet",
+                "last_update": None,
+                "entities_processed": [],
+                "provider": self._entry.data.get(CONF_PROVIDER, "unknown"),
+            }
+
+        return {
+            "suggestions": self.coordinator.data.get("suggestions", "No suggestions"),
+            "last_update": self.coordinator.data.get("last_update", None),
+            "entities_processed": self.coordinator.data.get("entities_processed", []),
+            "provider": self._entry.data.get(CONF_PROVIDER, "unknown"),
+        }
+
+    @property
+    def available(self) -> bool:
+        """Return True if entity is available."""
+        return True
+
+    @callback
+    def _handle_coordinator_update(self) -> None:
+        """Handle updated data from the coordinator."""
+        if (
+            self.coordinator.data and
+            self.coordinator.data.get("suggestions") != self._previous_suggestions
+        ):
+            self._previous_suggestions = self.coordinator.data.get("suggestions")
+            self._attr_native_value = "New Suggestions Available"
+            self.async_write_ha_state()
+
+    async def async_added_to_hass(self) -> None:
+        """Run when entity is added to registry."""
+        await super().async_added_to_hass()
+        _LOGGER.debug("Suggestions sensor added to registry")
+
+
+class AIProviderStatusSensor(CoordinatorEntity, SensorEntity):
+    """Sensor to display provider status."""
+
+    def __init__(
+        self,
+        coordinator: DataUpdateCoordinator,
+        entry,
+        description: SensorEntityDescription,
+    ) -> None:
+        """Initialize the sensor."""
+        super().__init__(coordinator)
+        self.entity_description = description
+        self._attr_unique_id = f"{entry.entry_id}_{description.key}"
+        self._attr_device_info = {
+            "identifiers": {(DOMAIN, entry.entry_id)},
+            "name": f"AI Automation Suggester ({entry.data.get(CONF_PROVIDER, 'unknown')})",
+            "manufacturer": "Community",
+            "model": entry.data.get(CONF_PROVIDER, "unknown"),
+            "sw_version": "1.0.7",
+        }
+        self._entry = entry
+        self._attr_native_value = STATE_UNKNOWN
+        self._last_error = None
+
+    @property
+    def name(self) -> str:
+        """Return the name of the sensor."""
+        provider = self._entry.data.get(CONF_PROVIDER, "unknown")
+        return f"AI Provider Status ({provider})"
+
+    def _get_provider_status(self) -> str:
+        """Determine the current status of the provider."""
+        if not self.coordinator.last_update:
+            return PROVIDER_STATUS_DISCONNECTED
+
+        try:
+            # Check if last update was successful
+            if (
+                self.coordinator.data
+                and isinstance(self.coordinator.data, dict)
+                and "suggestions" in self.coordinator.data
+            ):
+                # Check if there was an error in the last suggestion generation
+                if self._last_error:
+                    status = PROVIDER_STATUS_ERROR
+                else:
+                    status = PROVIDER_STATUS_CONNECTED
+                return status
+
+            return PROVIDER_STATUS_ERROR
+
+        except Exception as err:
+            self._last_error = str(err)
+            _LOGGER.error("Error determining provider status: %s", err)
+            return PROVIDER_STATUS_ERROR
+
+    @property
+    def native_value(self) -> str:
+        """Return the state of the sensor."""
+        return self._get_provider_status()
+
+    @property
+    def extra_state_attributes(self) -> dict:
         """Return the state attributes."""
-        return {"suggestions": self.coordinator.data}
+        status = self._get_provider_status()
+        last_error = self._last_error if status == PROVIDER_STATUS_ERROR else None
+
+        return {
+            "provider": self._entry.data.get(CONF_PROVIDER, "unknown"),
+            "last_update": self.coordinator.last_update,
+            "status_details": {
+                PROVIDER_STATUS_CONNECTED: "Provider is connected and functioning normally",
+                PROVIDER_STATUS_DISCONNECTED: "Provider is disconnected or waiting for first update",
+                PROVIDER_STATUS_ERROR: "Provider encountered an error in last update",
+            }.get(status, "Unknown status"),
+            "last_error": last_error,
+            "model": self._entry.data.get("model", "default"),
+        }
+
+    @property
+    def available(self) -> bool:
+        """Return True if entity is available."""
+        return True
+
+    @property
+    def icon(self) -> str:
+        """Return the icon based on status."""
+        status = self._get_provider_status()
+        return {
+            PROVIDER_STATUS_CONNECTED: "mdi:check-circle",
+            PROVIDER_STATUS_DISCONNECTED: "mdi:circle-off-outline",
+            PROVIDER_STATUS_ERROR: "mdi:alert-circle",
+        }.get(status, "mdi:help-circle")
+
+    @callback
+    def _handle_coordinator_update(self) -> None:
+        """Handle updated data from the coordinator."""
+        self._attr_native_value = self._get_provider_status()
+        self.async_write_ha_state()
+
+    async def async_added_to_hass(self) -> None:
+        """Run when entity is added to registry."""
+        await super().async_added_to_hass()
+        _LOGGER.debug("Provider status sensor added to registry")
diff --git a/custom_components/ai_automation_suggester/services.yaml b/custom_components/ai_automation_suggester/services.yaml
index 9cdc50e..abcf1f6 100644
--- a/custom_components/ai_automation_suggester/services.yaml
+++ b/custom_components/ai_automation_suggester/services.yaml
@@ -1,4 +1,21 @@
+# custom_components/ai_automation_suggester/services.yaml
+
 generate_suggestions:
   name: Generate Suggestions
   description: "Manually trigger AI automation suggestions."
-  fields: {}
+  fields:
+    provider_config:
+      name: Provider Configuration
+      description: Which provider configuration to use (if you have multiple configured)
+      required: false
+      selector:
+        config_entry:
+          integration: ai_automation_suggester
+    custom_prompt:
+      name: Custom Prompt
+      description: Optional custom prompt to override the default system prompt
+      required: false
+      example: "Generate automations focusing on energy savings and security"
+      selector:
+        text:
+          multiline: true
diff --git a/custom_components/ai_automation_suggester/strings.json b/custom_components/ai_automation_suggester/strings.json
index 2ef2d4c..70c63e7 100644
--- a/custom_components/ai_automation_suggester/strings.json
+++ b/custom_components/ai_automation_suggester/strings.json
@@ -5,16 +5,13 @@
       "user": {
         "title": "Configure AI Automation Suggester",
         "data": {
-          "scan_frequency": "Scan Frequency (hours, set to 0 to disable)",
-          "initial_lag_time": "Initial Lag Time (minutes)",
-          "use_local_ai": "Use Local AI",
-          "openai_api_key": "OpenAI API Key"
+          "provider": "AI Provider"
         }
       }
     },
     "error": {
       "required": "This field is required.",
-      "invalid_api_key": "The OpenAI API key is invalid."
+      "invalid_api_key": "The API key is invalid."
     }
   },
   "services": {
diff --git a/custom_components/ai_automation_suggester/translations/de.json b/custom_components/ai_automation_suggester/translations/de.json
new file mode 100644
index 0000000..95e9296
--- /dev/null
+++ b/custom_components/ai_automation_suggester/translations/de.json
@@ -0,0 +1,56 @@
+{
+  "config": {
+    "step": {
+      "user": {
+        "title": "AI Automation Suggester konfigurieren",
+        "description": "Wählen Sie Ihren KI-Anbieter und konfigurieren Sie die Einstellungen",
+        "data": {
+          "provider": "KI-Anbieter",
+          "scan_frequency": "Scan-Häufigkeit (Stunden)",
+          "initial_lag_time": "Anfangsverzögerung (Minuten)"
+        }
+      },
+      "openai": {
+        "title": "OpenAI Konfiguration",
+        "data": {
+          "api_key": "API-Schlüssel",
+          "model": "Modell"
+        }
+      },
+      "anthropic": {
+        "title": "Anthropic Konfiguration",
+        "data": {
+          "api_key": "API-Schlüssel",
+          "model": "Modell"
+        }
+      },
+      "localai": {
+        "title": "LocalAI Konfiguration",
+        "data": {
+          "ip_address": "IP-Adresse",
+          "port": "Port",
+          "https": "HTTPS verwenden",
+          "model": "Modellname"
+        }
+      },
+      "ollama": {
+        "title": "Ollama Konfiguration",
+        "data": {
+          "ip_address": "IP-Adresse",
+          "port": "Port",
+          "https": "HTTPS verwenden",
+          "model": "Modellname"
+        }
+      }
+    },
+    "error": {
+      "cannot_connect": "Verbindung zum Dienst fehlgeschlagen",
+      "invalid_auth": "Ungültige Authentifizierung",
+      "invalid_config": "Ungültige Konfiguration",
+      "unknown": "Unerwarteter Fehler",
+      "no_entities": "Keine neuen Entitäten gefunden",
+      "api_error": "API-Fehler aufgetreten",
+      "required_field": "Dieses Feld ist erforderlich"
+    }
+  }
+}
\ No newline at end of file
diff --git a/custom_components/ai_automation_suggester/translations/en.json b/custom_components/ai_automation_suggester/translations/en.json
index dd7b6a8..654154d 100644
--- a/custom_components/ai_automation_suggester/translations/en.json
+++ b/custom_components/ai_automation_suggester/translations/en.json
@@ -4,15 +4,48 @@
       "user": {
         "title": "Configure AI Automation Suggester",
         "data": {
-          "scan_frequency": "Scan Frequency (hours)",
-          "initial_lag_time": "Initial Lag Time (minutes)",
-          "use_local_ai": "Use Local AI",
-          "openai_api_key": "OpenAI API Key"
+          "provider": "AI Provider"
+        }
+      },
+      "provider_config": {
+        "title": "Provider Settings",
+        "data": {
+          "model": "Model Name",
+          "api_key": "API Key",
+          "ip_address": "IP Address",
+          "port": "Port",
+          "use_https": "Use HTTPS"
         }
       }
     },
     "error": {
-      "required": "This field is required."
+      "cannot_connect": "Failed to connect to service",
+      "invalid_auth": "Invalid authentication",
+      "invalid_config": "Invalid configuration",
+      "unknown": "Unexpected error",
+      "no_entities": "No new entities found",
+      "api_error": "API error occurred",
+      "required_field": "This field is required"
+    },
+    "abort": {
+      "already_configured": "Provider is already configured",
+      "provider_not_supported": "This provider is not currently supported"
+    }
+  },
+  "services": {
+    "generate_suggestions": {
+      "name": "Generate Suggestions",
+      "description": "Manually trigger AI automation suggestions",
+      "fields": {
+        "provider_config": {
+          "name": "Provider Configuration",
+          "description": "Which provider configuration to use (if you have multiple)"
+        },
+        "custom_prompt": {
+          "name": "Custom Prompt",
+          "description": "Optional custom prompt to override the default system prompt"
+        }
+      }
+    }
+  }
+}
diff --git a/hacs.json b/hacs.json
index 87889c4..7e34175 100644
--- a/hacs.json
+++ b/hacs.json
@@ -1,5 +1,7 @@
 {
-  "name": "AI Automation Suggester",
-  "content_in_root": false,
-  "homeassistant": "2024.10.2"
- }
\ No newline at end of file
+  "name": "AI Automation Suggester",
+  "content_in_root": false,
+  "render_readme": true,
+  "homeassistant": "2024.1.0",
+  "hacs": "1.26.0"
+}
\ No newline at end of file