The UIAutomation activities package contains all the basic activities used for creating automation projects.
❗️ Important
Automation processes that use UI Automation activities cannot run while the screen is locked.
📘 Note:
Starting with v2020.10, the UIAutomationNext package has been deprecated and the existing UIAutomation package has been expanded to include all the modern features previously available in UiAutomationNext. You are also able to install the unified UIAutomation activities package even on Studio versions 2020.4.1 and lower. This displays all the classic and modern activities in the activities pane. Read more about the Modern Design Experience.
Starting with UiPath.UIAutomation.Activities v19.11, all Abbyy-related activities have been moved to a separate package. Install the UiPath.Abbyy.Activities package if you want to use its activities for OCR, Cloud OCR, classification, and data extraction.
These activities enable the robots to:
- Simulate human interaction, such as performing mouse and keyboard commands or typing and extracting text, to achieve basic UI automation.
- Use technologies such as OCR or image recognition to perform image and text automation.
- Create triggers based on UI behavior, enabling the robot to perform certain actions when specific events occur on the machine.
- Perform browser interactions and window manipulation.

📘 Note:
The UiPath.Vision dependency package includes third-party libraries. These external dependencies are used exclusively for the purpose of enabling the implementation of specific activities in the UiPath.UIAutomation.Activities package.
Here are some examples:
- AbbyyOnlineSdk.dll - used exclusively in the Abbyy Cloud OCR activity, at run-time, as a wrapper over the Abbyy online service calls.
- Interop.FREngine.v11.dll - used exclusively in the Abbyy OCR activity, at run-time, as a wrapper over the Abbyy FineReader Engine calls.
- Interop.MODI.dll - used exclusively in the Microsoft OCR activity, at run-time, when executed on a Windows 7 or Windows Server machine.
As of v2018.3, the UiPath.Core.Activities package was split into the UIAutomation and System packs. Find out more about the Core Activities Split.
Certain scenarios may require managing strict UIAutomation dependency versions. For example, languages for the Tesseract OCR engine must be installed manually according to the UiPath.Vision version. In other words, a process that uses such a language requires the corresponding UIAutomation activities package version. You can find more details on this page.
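To illustrate version pinning, a UiPath project declares its activity package dependencies in the project's project.json file. The sketch below is a minimal, hand-written example (the project name and version values are placeholders, and the surrounding fields are simplified; Studio normally generates and manages this file for you). Wrapping a version in square brackets follows the NuGet convention for requiring that exact version:

```json
{
  "name": "MyProcess",
  "projectVersion": "1.0.0",
  "dependencies": {
    "UiPath.UIAutomation.Activities": "[21.10.3]"
  }
}
```

With the bracketed version, the runtime resolves exactly UiPath.UIAutomation.Activities 21.10.3 instead of a lowest-applicable match, which is what a strict-dependency scenario such as the Tesseract language example requires.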
The UIAutomation activities package includes the following internally developed dependencies:
- UiPath.Vision - enables the functionality of the OCR and Computer Vision engines.
- UiPath - a required library for the UIAutomation activities.

Release Notes
Dependencies
The following table lists the dependencies included with each version of the UiPath.UIAutomation.Activities package:
| UiPath.UIAutomation.Activities | UiPath.Vision | UiPath |
| --- | --- | --- |
| 18.3.6877.28298 | 1.1.0 | 9.0.6877.24355 |
| 18.3.6897.22543 | 1.1.0 | 9.0.6893.27943 |
| 18.3.6962.28967 | 1.1.0 | 9.0.6962.24417 |
| 18.4.2 | 1.2.0 | 10.0.6913.22031 |
| 18.4.3 | 1.2.0 | 10.0.6929.25268 |
| 18.4.4 | 1.2.0 | 10.0.6992.20526 |
| 18.4.5 | 1.2.0 | 10.0.7020.22745 |
| 18.4.6 | 1.2.0 | 10.0.7194.26789 |
| 18.4.7 | 1.2.1 | 10.0.7445.17204 |
| 19.1.0 | 1.2.0 | 10.0.6957.21531 |
| 19.2.0 | 1.3.0 | 10.0.6957.21531 |
| 19.3.0 | 1.4.0 | 10.0.7004.31775 |
| 19.4.1 | 1.5.0 | 19.4.7054.14370 |
| 19.4.2 | 1.5.0 | 19.4.7068.19937 |
| 19.5.0 | 1.6.0 | 19.5.7079.28746 |
| 19.6.0 | 1.6.0 | 19.6.7108.25473 |
| 19.7.0 | 1.6.1 | 19.7.7128.27029 |
| 19.8.0-ce | 1.7.0 | 19.8.7173.31251-ce |
| 19.10.0-ce | 1.8.1 | 19.10.7230.26901 |
| 19.10.1 | 1.8.1 | 19.10.7243.31457 |
| 19.11.0 | 2.0.0 | 19.10.7275.19994 |
| 19.11.1 | 2.0.0 | 19.10.7312.25504 |
| 19.11.2 | 2.0.1 | 19.10.7312.25504 |
| 19.11.3 | 2.0.1 | 19.10.7452.28108 |
| 19.11.4 | 2.0.1 | 19.10.7601.15369 |
| 19.11.5 | 2.0.1 | 19.10.7601.15369 |
| 20.4.1 | 2.0.3 | 20.4.7422.14731 |
| 20.4.2 | 2.0.3 | 20.4.7472.17184 |
| 20.4.3 | 2.0.3 | 20.4.7537.15740 |
| 20.10.5 | 2.2.0 | 20.10.7585.27318 |
| 20.10.6 | 2.2.0 | 20.10.7585.27318 |
| 20.10.7 | 2.2.0 | 20.10.7641.24102 |
| 20.10.8 | 2.2.0 | 20.10.7641.24102 |
| 20.10.9 | 2.2.0 | 20.10.7641.24102 |
| 20.10.10 | 2.2.0 | 20.10.7810.17763 |
| 21.4.3 | 3.0.1 | 21.4.23.31065 |
| 21.4.4 | 3.0.1 | 21.4.25.3292 |
| 21.10.3 | 3.1.4 | 21.10.30.58966 |

Computer Vision
📘 Important
Because the Computer Vision activities were moved to the UIAutomation pack in 19.10, installing the UIAutomation v19.10.1 pack in a project that already contains a version of the Computer Vision pack throws an error.
🚧 Note:
The Computer Vision activities are not compatible with Windows 7.
The AI Computer Vision pack contains refactored fundamental UIAutomation activities such as Click, Type Into, or Get Text. The main difference between the CV activities and their classic counterparts is their usage of the Computer Vision neural network developed in-house by our Machine Learning department. The neural network is able to identify UI elements such as buttons, text input fields, or check boxes without the use of selectors.
These activities were created mainly to automate virtual desktop environments, such as Citrix machines, helping to work around situations where selectors do not exist or are unreliable. They send an image of the window being automated to the neural network, which analyzes it, identifies all the UI elements in the image, and labels them according to their content. Smart anchors are then used to pinpoint the exact position of the UI element you are interacting with, ensuring that the intended action is performed successfully.
To use the Computer Vision activities in the current project, you need an ApiKey, which can be obtained from the Automation Cloud as detailed here. The ApiKey must then be inserted in the ApiKey field in the Computer Vision Project Settings property category. See the Project Settings page for more information.
📘 Note:
The settings related to the server connection apply to the entire project and are reflected in all subsequent Computer Vision Screen Scope activities.
Using the Computer Vision Activities
All the activities in this pack work only inside a Computer Vision Screen Scope activity, which establishes the actual connection to the neural network server, enabling you to analyze the UI of the application you want to automate. Any workflow that uses Computer Vision activities must first drag a Computer Vision Screen Scope activity into the Designer panel. Once added, you can use the Indicate On Screen button in the body of the scope activity to select the screen region you want to work with.
📘 Note:
Double-clicking the informative screenshot displays the captured image, highlighting in purple all the UI elements identified by the neural network and the OCR engine.
📘 Note:
Region selection can also be used to indicate only a part of the application UI you want to automate. This is particularly useful when multiple text fields have the same label and cannot be correctly identified.
Once a CV Screen Scope activity is properly configured, you can start using all of the other activities in the pack to build your automation.
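As a rough illustration of this nesting, the XAML below is a hypothetical, hand-simplified sketch of a workflow with CV activities inside the scope. Studio generates the real XAML, so every element name, namespace prefix, and attribute here is an assumption, not the actual schema:

```xml
<!-- Hypothetical sketch only: Studio generates the real XAML, and its
     element and attribute names differ from these placeholders. -->
<Sequence DisplayName="CV automation sketch">
  <!-- The scope establishes the connection to the Computer Vision server. -->
  <cv:CVScreenScope DisplayName="CV Screen Scope">
    <!-- Activities inside the scope act on elements found in the analyzed image. -->
    <cv:CVTypeInto DisplayName="CV Type Into" Text="[userName]" />
    <cv:CVClick DisplayName="CV Click" />
  </cv:CVScreenScope>
</Sequence>
```

The point of the sketch is the structure: the scope runs first, sends the screen image for analysis, and every CV activity placed inside it then resolves its target against that analyzed image.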
Indicating On Screen
The activities that perform actions on UI elements can be configured at design time by using the Indicate On Screen button present in the body of the activities. The activities that have this feature are:
- CV Click
- CV Element Exists
- CV Get Text
- CV Highlight
- CV Hover
- CV Type Into

Clicking the Indicate On Screen (hotkey: I) button opens the helper wizard.
The CV Click, CV Hover, and CV Type Into activities also feature a Relative To button in the helper wizard, which enables you to configure the target as being relative to an element.
The Indicate field specifies what you are indicating at the moment. When the helper is opened for the first time, the Target needs to be indicated. For each possible target, the wizard automatically selects an anchor, if one is available.
The Computer Vision activities also offer support for indicating tables. Targeting in tables can be done by selecting a cell you want to interact with, which prompts the neural network to automatically identify the column and the row that define the position of that cell, displaying them in a grid.
By default, the names of the column and row are used in the descriptor to pinpoint the location of the cell. However, in the case of dynamic tables, you can hold down the Shift key and click the column and row indexes to use those in your descriptor. This might be useful in situations where column and row names are changed, but you want to extract the same position of a cell.
After successfully indicating the Target, the wizard closes and the activity is configured with the target you selected.
If no unique anchor is automatically identified, the Indicate field informs you of this fact, enabling you to indicate additional Anchors, which make the target easier to find.
The Show Elements (hotkey: s) button in the wizard highlights all UI elements that have been identified by the Computer Vision analysis, making it easier for you to choose what to interact with.
The Refresh Scope (hotkey: F5) button can be used at design time, in case something changes in the target app, enabling you to send a new picture to the CV server to be analyzed again.
The Refresh After Delay (hotkey: F2) button performs a refresh of the target app after waiting 3 seconds.
📘 Important
Keep in mind that every time you choose to report a misbehavior of the neural network, you help it learn, and you indirectly help us provide you with a better product. Please report as many issues as you can, so that we have the chance to learn about them and fix them.