Category: Blog

  • flutter_mobile_command_tools

    MobileTool

A super convenient adb command tool that supports all desktop platforms. Whether you are a developer or a tester, give it a try.

Notes

• About Android

  Please enable USB debugging in your phone's developer options and make sure the phone can connect to the computer and is reachable via adb. The Android module of this tool simply wraps most adb commands into a "lazy mode"; if you run into problems, feel free to open an issue. adb command reference

• About iOS

  Uses libimobiledevice. The iOS side is of limited value, so only a few small features were written. The Aisi assistant (爱思) is still the better choice.

• About configuration files and tools

  • Local file paths

    1. Windows: C:\Users\<username>\Documents\MobileTools
    2. Linux: /home/<username>/Documents/MobileTools
    3. macOS: /Users/<username>/Documents/MobileTools
  • Directory structure of MobileTools

    1. apksigner folder (signing files)
    2. config folder (stores miscellaneous information)
    3. tools folder (adb, decompilation and other local files)
      • apktool folder (holds apktool.jar and FakerAndroid.jar; download them from the cloud drive)
      • uiautomatorviewer folder (holds the focus-inspection tool; download it from the cloud drive)
      • adb and fastboot binaries
    4. SETTING (settings file for local paths)
    5. VERSION file (version number of the current build)

  If you want to use the decompilation tools or the tool for inspecting the focused view of the current screen, those tools are too large to bundle, so they are stored on Baidu Cloud; download them into the tools folder if you need them. Link, extraction code: xjwr.

Features

Settings

• adb (select your local adb binary to prevent conflicts with the built-in adb)
• java (some commands need a Java environment; if you don't want to configure environment variables, you can point this at the java binary)
• libimobiledevice (environment for iOS; honestly not very useful)

    Android

• Enable Root: if the phone has root access, turn this on; it is used when fetching device information. If the phone has Magisk, you can install the adb_root plugin so that all commands run with root privileges.

• Built-in ADB: if your computer has no adb, turning this switch on uses the bundled adb. If your computer already has adb, click the settings icon in the top-right corner and configure the adb path, so the bundled adb and your installed adb don't conflict.

• Basic operations

  • Get devices: lists all currently connected Android devices in the drop-down box (if only a single device is connected, this step can be skipped)
  • Get device info: select a device, then click to fetch its information; some fields require root on newer Android versions
  • Custom adb commands (new in 3.0): commands this tool does not cover can be added and saved for later use
  • Custom other commands (new in 3.0): other related terminal commands can be added and saved for later use
• Wireless connection

  • Wireless connect: with a real device selected and the custom option off, the tool fetches the device's IP; if that succeeds it connects directly, otherwise enter ip:port yourself. For emulator devices, the default port of the first device of every supported emulator is built in. Then click "wireless connect" and you are done.
  • Disconnect: only disconnects wirelessly connected devices and emulators
• App management

  • Current package: gets the package name of the app currently on screen and shows it in the drop-down box above.
  • Frozen packages (new in 3.0): gets all frozen app package names and shows them in the drop-down box above.
  • Third-party packages (new in 2.0): gets all third-party app package names and shows them in the drop-down box above.
  • System packages (new in 2.0): gets all system app package names and shows them in the drop-down box above.
  • Freeze (new in 3.0): freezes the apk of the currently selected package name
  • Unfreeze (new in 3.0): first fetch all frozen package names, then select one and unfreeze it
  • Install apk: select a local apk file and install it on the phone
  • Uninstall apk: uninstalls the apk of the currently selected package name.
  • Main Activity (new in 3.0): gets the launcher Activity class name of the current package.
  • Current Activity (new in 3.0): the class name of the Activity currently on screen.
  • App package info (new in 2.0): information about the currently selected package; parts of it can be copied as preparation for app interaction.
  • Apk install path: the install path of the currently selected package.
  • Clear data: clears the cached data of the currently selected package.
• App info (new in 3.0)

  • Internal package and external apk: for an internal package, fetch the package names first and then click the button below; for an external apk, clicking the button below opens a dialog to pick the apk
  • Apk package info: gets the app's package info (package name, app name, app version, launcher class)
  • Apk permissions: gets the permissions the apk requires
• App interaction (new in 2.0)

  Since 3.0 all of the following entries are saved locally, so you can add your own and reuse them next time. They are stored in the config folder.

  • Start Activity: a dialog asks for the Activity name to start; if nothing is entered, the app of the current package name is started. (The launcher class can be obtained from Main Activity / package info.)
  • Send BroadcastReceiver: a dialog asks for the broadcast to send; some system broadcasts are also listed below, useful for testing broadcasts that are hard to trigger.
  • Start Service: a dialog asks for the Service to start
  • Stop Service: a dialog asks for the Service to stop
• File management

  • Push file: select a file and push it to the current device; the default destination is /data/local/tmp. Click "custom path" to enter the path you want to push to.
  • Pull file: pulls files from the current device to the desktop.
    1. Phone crash: click "phone crash" to collect and list all crash logs, then pick a timestamp and click "pull crash"; the log is saved to the desktop
    2. Pull file: plain file pulling. First configure the search path, click "search" to list all files under that path, then click "pull file"; the files are saved to the desktop as well.
    3. Pull anr: a single click pulls the anr logs to the desktop (this takes a while, please be patient)
• Simulated input: most input-simulation commands are available.

  • Open the focus-inspection tool (new in 3.0, needs a Java environment; download the tool from the cloud drive into the tools folder)
  • Add command file: four kinds of commands are supported: swipe, tap, text and all key events (see adb_simulate_code.txt)
  • Reload command file (new in 3.0): after editing, the commands can be reloaded and used right away
  • Run commands
    The button that executes the commands
  • Stop commands: only effective when looping is enabled; stops the running loop
• Reverse engineering (new in 3.0, needs a Java environment; download the tools from the cloud drive into the tools folder)

  • Apktool unpack: unpacks an apk with apktool. See Apktool for details
  • Apktool repack: repacks an apk with apktool. See Apktool for details
  • FakerAndroid: uses FakerAndroid to unpack an apk into a Gradle project that can be modified and rebuilt. See FakerAndroid for details
• Flashing

  • Reboot phone: restarts the phone
  • Reboot to fastboot: reboots the phone into fastboot mode
  • Reboot to recovery: reboots the phone into recovery mode
• Utilities

  • Screenshot (changed in 2.0): captures the current screen and saves it to the desktop (named <current time>.png)
  • Screen recording (changed in 2.0): records the current screen; set the duration first, and when finished the file is saved to the desktop (named <current time>.mp4)
  • v2 signing: signs with apksigner. The signing files can be replaced as long as the file names stay the same. apksigner.json holds the signing key and passwords; remember to update it if you replace the key.
  • Signature check: verifies the apk's signature information

iOS

The iOS side is of limited value; only a few commands were written. You need to install iTunes plus the tools below. It offers listing devices, listing package names, and installing/uninstalling ipa files. Honestly, just use the Aisi assistant (爱思).

Build

On every platform the window now takes up 2/3 of the screen and is centered; on Linux it is not centered because I have never worked with GTK.

• windows

  Install Visual Studio with the C++ desktop workload.
  flutter build windows  // build
  build/windows/runner will contain the generated Visual Studio solution, which can be imported for development.
  The generated exe is at build/windows/runner/Release/*.exe
      
• linux

  // the following dependencies are required on Linux
  sudo apt-get update
  sudo apt install clang
  sudo apt install cmake
  sudo apt install ninja-build
  sudo apt install libgtk-3-dev


  file INSTALL cannot copy file  // if this error appears
  flutter clean  // run this, then restart Android Studio

  flutter build linux // builds the release bundle, located under build/linux/release/bundle

  If adb reports adb devices => no permissions (user in plugdev group; are your udev rules wrong?) [duplicate]
  see this reference for the fix: https://stackoverflow.com/questions/53887322/adb-devices-no-permissions-user-in-plugdev-group-are-your-udev-rules-wrong
      
      
• macos

  Install Xcode. A few small issues came up during the build and were fixed after some searching, one of them being
  [tool_crash] Invalid argument(s): Cannot find executable for /Users/imac/Documents/FlutterSDK/flutter/bin/cache/artifacts
  solution: https://github.com/flutter/flutter/issues/85107

  flutter build macos // builds the release bundle, located under build/macos/Build/Products/Release/
  Import the files under the macos directory into Xcode for development
      

Screenshots

    • windows(1920*1080) screenshots/windows.png

    • linux (1920*1080) screenshots/linux.png

    • macos (1440*960) screenshots/macos.png

Other

    Visit original content creator repository https://github.com/LuckyLi706/flutter_mobile_command_tools
  • Codey

    Codey

    MavenBuild Actions Status

Discord bot to compile and run code and fix code formatting, all without leaving Discord and just a single click (reaction) away. Supports tons of languages and automatically finds runnable code in messages.

Capabilities:

• compile and run code from Discord messages
• reformat Java code and fix indentation
• reminder function
• listen to GitHub repository events and post a summary on push
• show date and time from messages in the local timezone
• send GET requests via slash command

    Codey Demo

Discord invite link to invite the bot to your server

Paste the link into a browser window and select the server you want the bot to join.

https://discord.com/api/oauth2/authorize?client_id=779383631255961640&permissions=11328&scope=bot

    Build and run in docker

• Replace change-me with the JDA token in docker-compose.yml
    • docker-compose build && docker-compose up -d

Run

• Replace change-me in application.yml when running with spring-boot:run, or add the parameter codey.token with your bot's token to your run configuration in your IDE.
• Run the main method in the class CodeyApplication or the Maven task spring-boot:run

How to get a Discord token and invite your own bot to your server

• Visit https://discord.com/developers/applications and log in
• Create a New Application and give it a name
• Click Bot and then Add Bot
• To reveal your token, click Click To Reveal Token and copy that token to the places described above, depending on how you run it
• Select OAuth2
• Check bot in Scopes
• After checking bot you can select the bot permissions Send Messages, Add Reactions, View Channels and Manage Messages (or whatever it is your bot needs if you don't intend to build the one in this repo)
• Now click Copy next to the generated link and paste it into a browser window.
• Select your server to let the bot join.
• If you run your bot now, it shows as online in your server's member list.
    Visit original content creator repository https://github.com/yours-truly-phil/Codey
  • line-message-analyzer

📈 Visualizing your LINE messages with plots and numbers!

    Try it out! https://chonyy.github.io/line-message-analyzer/

    For more information and instructions https://www.dcard.tw/f/funny/p/233240764

Frequently Asked Questions

Q: Is this an official LINE feature?

A: No, this is a personal side project. It has no relation with LINE or LY Corporation.

Q: Does this leak my chat history or personal data?

A: All computation runs locally on the user's machine; the web backend does not keep any data. The full source code is included, and there is no server-side code in it, so there is no way for me to know what users upload. You can rest assured.

If you still don't trust it but want to run the analysis anyway, you can clone the code below and host it yourself; it works exactly the same, even without an internet connection. This site just provides a convenient platform so you don't have to download the code and can analyze from a single web page; running the code locally yourself gives identical results.

Q: Can it analyze group chats?

A: Groups work too, but the layout only shows the two people who talked the most. For a complete group analysis, you can use the old version I wrote.

Q: Stickers and photos are not picked up, and my name shows up in the word cloud?

A: Please avoid spaces in usernames and keep them short.

Q: The page doesn't work or the layout looks odd?

A: It is recommended to copy the URL into Chrome.

Q: It only works once, and pressing "back" breaks it?

A: Refresh the page and it will work again.

Q: Nothing shows up even after waiting ten minutes?

A: Unless there are around a million messages, the results should appear within three minutes on a computer. If nothing shows up, GitHub Pages is probably being unstable and the page crashed; wait five minutes or try another link. People who hit this usually get results right away after trying again a bit later.

Q: The analyzing result looks weird and AM/PM appears in my username?

A: It should work after changing your LINE app to the Chinese version. Sad to say, I'm too lazy to build an English version for this. ( ಠ ಠ )

    Responsive Design

    Responsive design is implemented to make the website available on both desktop and mobile.

    However, trying out the website on desktop is more recommended.

    Visit original content creator repository https://github.com/chonyy/line-message-analyzer
  • yaml-language-server

    CI version Coverage Status

    YAML Language Server

    Supports JSON Schema 7 and below. Starting from 1.0.0 the language server uses eemeli/yaml as the new YAML parser, which strictly enforces the specified YAML spec version. Default YAML spec version is 1.2, it can be changed with yaml.yamlVersion setting.

    Features

    1. YAML validation:
      • Detects whether the entire file is valid yaml
    2. Validation:
      • Detects errors such as:
        • Node is not found
        • Node has an invalid key node type
        • Node has an invalid type
        • Node is not a valid child node
      • Detects warnings such as:
        • Node is an additional property of parent
    3. Auto completion:
      • Auto completes on all commands
      • Scalar nodes autocomplete to schema’s defaults if they exist
    4. Hover support:
      • Hovering over a node shows description if available
    5. Document outlining:
      • Shows a complete document outline of all nodes in the document

    Language Server Settings

    The following settings are supported:

    • yaml.yamlVersion: Set default YAML spec version (1.2 or 1.1)
    • yaml.format.enable: Enable/disable default YAML formatter (requires restart)
    • yaml.format.singleQuote: Use single quotes instead of double quotes
    • yaml.format.bracketSpacing: Print spaces between brackets in objects
    • yaml.format.proseWrap: Always: wrap prose if it exceeds the print width, Never: never wrap the prose, Preserve: wrap prose as-is
    • yaml.format.printWidth: Specify the line length that the printer will wrap on
    • yaml.validate: Enable/disable validation feature
    • yaml.hover: Enable/disable hover
    • yaml.completion: Enable/disable autocompletion
    • yaml.schemas: Helps you associate schemas with files in a glob pattern
    • yaml.schemaStore.enable: When set to true the YAML language server will pull in all available schemas from JSON Schema Store
    • yaml.schemaStore.url: URL of a schema store catalog to use when downloading schemas.
    • yaml.customTags: Array of custom tags that the parser will validate against. It has two ways to be used. Either an item in the array is a custom tag such as “!Ref” and it will automatically map !Ref to scalar or you can specify the type of the object !Ref should be e.g. “!Ref sequence”. The type of object can be either scalar (for strings and booleans), sequence (for arrays), map (for objects).
    • yaml.maxItemsComputed: The maximum number of outline symbols and folding regions computed (limited for performance reasons).
    • [yaml].editor.tabSize: the number of spaces to use when autocompleting. Takes priority over editor.tabSize.
    • editor.tabSize: the number of spaces to use when autocompleting. Default is 2.
    • http.proxy: The URL of the proxy server that will be used when attempting to download a schema. If it is not set or it is undefined no proxy server will be used.
    • http.proxyStrictSSL: If true the proxy server certificate should be verified against the list of supplied CAs. Default is false.
    • [yaml].editor.formatOnType: Enable/disable on type indent and auto formatting array
    • yaml.disableDefaultProperties: Disable adding not required properties with default values into completion text
• yaml.suggest.parentSkeletonSelectedFirst: If true, the user must select some parent skeleton first before autocompletion starts to suggest the rest of the properties. When the YAML object is not empty, autocompletion ignores this setting and returns all properties and skeletons.
• yaml.style.flowMapping: Forbids flow style mappings if set to forbid
• yaml.style.flowSequence: Forbids flow style sequences if set to forbid
• yaml.keyOrdering: Enforces alphabetical ordering of keys in mappings when set to true. Default is false
    Adding custom tags

    In order to use the custom tags in your YAML file you need to first specify the custom tags in the setting of your code editor. For example, we can have the following custom tags:

    "yaml.customTags": [
        "!Scalar-example scalar",
        "!Seq-example sequence",
        "!Mapping-example mapping"
    ]

    The !Scalar-example would map to a scalar custom tag, the !Seq-example would map to a sequence custom tag, the !Mapping-example would map to a mapping custom tag.

    We can then use the newly defined custom tags inside our YAML file:

    some_key: !Scalar-example some_value
    some_sequence: !Seq-example
      - some_seq_key_1: some_seq_value_1
      - some_seq_key_2: some_seq_value_2
    some_mapping: !Mapping-example
      some_mapping_key_1: some_mapping_value_1
      some_mapping_key_2: some_mapping_value_2
    Associating a schema to a glob pattern via yaml.schemas:

    yaml.schemas applies a schema to a file. In other words, the schema (placed on the left) is applied to the glob pattern on the right. Your schema can be local or online. Your schema path must be relative to the project root and not an absolute path to the schema.

    For example: If you have project structure

    myProject

       > myYamlFile.yaml

    you can do

    yaml.schemas: {
        "https://json.schemastore.org/composer": "/myYamlFile.yaml"
    }

    and that will associate the composer schema with myYamlFile.yaml.

    More examples of schema association:

    Using yaml.schemas settings

    Single root schema association:

    When associating a schema it should follow the format below

    yaml.schemas: {
        "url": "globPattern",
        "Kubernetes": "globPattern"
    }

    e.g.

    yaml.schemas: {
        "https://json.schemastore.org/composer": "/*"
    }

    e.g.

    yaml.schemas: {
        "kubernetes": "/myYamlFile.yaml"
    }

    e.g.

    yaml.schemas: {
        "https://json.schemastore.org/composer": "/*",
        "kubernetes": "/myYamlFile.yaml"
    }

    On Windows with full path:

    yaml.schemas: {
        "C:\\Users\\user\\Documents\\custom_schema.json": "someFilePattern.yaml",
    }

    On Mac/Linux with full path:

    yaml.schemas: {
        "/home/user/custom_schema.json": "someFilePattern.yaml",
    }

    Since 0.11.0 YAML Schemas can be used for validation:

     "/home/user/custom_schema.yaml": "someFilePattern.yaml"

    A schema can be associated with multiple globs using a json array, e.g.

    yaml.schemas: {
        "kubernetes": ["filePattern1.yaml", "filePattern2.yaml"]
    }

    e.g.

    "yaml.schemas": {
        "http://json.schemastore.org/composer": ["/*"],
        "file:///home/johnd/some-schema.json": ["some.yaml"],
        "../relative/path/schema.json": ["/config*.yaml"],
        "/Users/johnd/some-schema.json": ["some.yaml"],
    }

    e.g.

    "yaml.schemas": {
        "kubernetes": ["/myYamlFile.yaml"]
    }

    e.g.

    "yaml.schemas": {
        "http://json.schemastore.org/composer": ["/*"],
        "kubernetes": ["/myYamlFile.yaml"]
    }

    Multi root schema association:

    You can also use relative paths when working with multi root workspaces.

    Suppose you have a multi root workspace that is laid out like:

    My_first_project:
       test.yaml
       my_schema.json
    My_second_project:
       test2.yaml
       my_schema2.json

    You must then associate schemas relative to the root of the multi root workspace project.

    yaml.schemas: {
        "My_first_project/my_schema.json": "test.yaml",
        "My_second_project/my_schema2.json": "test2.yaml"
    }

    yaml.schemas allows you to specify json schemas that you want to validate against the yaml that you write. Kubernetes is an optional field. It does not require a url as the language server will provide that. You just need the keyword kubernetes and a glob pattern.

    Nested Schema References

    Suppose a file is meant to be a component of an existing schema (like a job.yaml file in a circleci orb), but there isn’t a standalone schema that you can reference. If there is a nested schema definition for this subcomponent, you can reference it using a url fragment, e.g.:

    yaml.schemas: {
        "https://json.schemastore.org/circleciconfig#/definitions/jobs/additionalProperties": "/src/jobs/*.yaml",
    }

    Note This will require reading your existing schema and understanding the schemastore structure a bit. (TODO: link to a documentation or blog post here?)

    Using inlined schema

    It is possible to specify a yaml schema using a modeline.

    # yaml-language-server: $schema=<urlToTheSchema>

    Also it is possible to use relative path in a modeline:

    # yaml-language-server: $schema=../relative/path/to/schema

    or absolute path:

    # yaml-language-server: $schema=/absolute/path/to/schema

    Schema priority

    The following is the priority of schema association in highest to lowest priority:

    1. Modeline
    2. CustomSchemaProvider API
    3. yaml.settings
    4. Schema association notification
    5. Schema Store

    Containerized Language Server

    An image is provided for users who would like to use the YAML language server without having to install dependencies locally.

    The image is located at quay.io/redhat-developer/yaml-language-server

    To run the image you can use:

    docker run -it quay.io/redhat-developer/yaml-language-server:latest

    Language Server Protocol version

yaml-language-server uses vscode-languageserver@7.0.0, which implements LSP 3.16

    Language Server Protocol extensions

    SchemaSelectionRequests

    SupportSchemaSelection Notification

The support schema selection notification is sent from a client to the server to inform the server that the client supports JSON Schema selection.

    Notification:

    • method: 'yaml/supportSchemaSelection'
    • params: void

    SchemaStoreInitialized Notification

The schema store initialized notification is sent from the server to a client to inform the client that the server has finished initializing/loading schemas from the schema store, and the client can now ask for schemas.

    Notification:

    • method: 'yaml/schema/store/initialized'
    • params: void

    GetAllSchemas Request

The get all schemas request is sent from a client to the server to get all known schemas.

    Request:

    • method: 'yaml/get/all/jsonSchemas';
• params: the document URI; the server will mark the schemas used for that document

    Response:

    • result: JSONSchemaDescriptionExt[]
    interface JSONSchemaDescriptionExt {
      /**
       * Schema URI
       */
      uri: string;
      /**
       * Schema name, from schema store
       */
      name?: string;
      /**
       * Schema description, from schema store
       */
      description?: string;
      /**
       * Is schema used for current document
       */
      usedForCurrentFile: boolean;
      /**
       * Is schema from schema store
       */
      fromStore: boolean;
    }
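
For clients built on vscode-languageclient, a rough sketch of issuing this request could look like the following (the client variable, the timing, and the document URI are assumptions, not part of the server's API):

// Rough sketch: ask the server for all known schemas once the schema store is ready.
// Assumes `client` is a started vscode-languageclient LanguageClient instance.
client.onNotification('yaml/schema/store/initialized', async () => {
  const documentUri = 'file:///path/to/example.yaml'; // hypothetical document
  const schemas = await client.sendRequest('yaml/get/all/jsonSchemas', documentUri);
  for (const schema of schemas) {
    console.log(schema.uri, schema.usedForCurrentFile ? '(used for current file)' : '');
  }
});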

    GetSchemas Request

The request is sent from a client to the server to get the schemas used for the current document. The client can use this method to indicate in the UI which schemas are used for the current YAML document.

    Request:

    • method: 'yaml/get/jsonSchema';
• params: the document URI for which to get the used schemas

    Response:

    • result: JSONSchemaDescription[]
interface JSONSchemaDescription {
      /**
       * Schema URI
       */
      uri: string;
      /**
       * Schema name, from schema store
       */
      name?: string;
      /**
       * Schema description, from schema store
       */
      description?: string;
    }

    Clients

    This repository only contains the server implementation. Here are some known clients consuming this server:

    Developer Support

    Getting started

    1. Install prerequisites:
    2. Fork and clone this repository
    3. Install the dependencies
  cd yaml-language-server
  npm install
4. Build the language server
  npm run build
    5. The new built server is now located in ./out/server/src/server.js.
      node (Yaml Language Server Location)/out/server/src/server.js [--stdio]

    Connecting to the language server via stdio

    We have included the option to connect to the language server via stdio to help with integrating the language server into different clients.

    ESM and UMD Modules

    Building the YAML Language Server produces CommonJS modules in the /out/server/src directory. In addition, a build also produces UMD (Universal Module Definition) modules and ES Modules (ESM) in the /lib directory. That gives you choices in using the YAML Language Server with different module loaders on the server side and in the browser with bundlers like webpack.

    CI

    We use a GitHub Action to publish each change in the main branch to npm registry with the next tag. You may use the next version to adopt the latest changes into your project.

    Visit original content creator repository https://github.com/redhat-developer/yaml-language-server
  • marienbad

    marienbad

You can find here a marienbad game (also called Nim) coded in C, with 3 different AI levels (easy, medium, hard), using the ncurses library.

    Marienbad demo

I coded this project during my studies; the original aim was to create a text-only marienbad with an AI. I went further and made it more user-friendly with a better graphical approach using the ncurses library, and added other features including 3 AI levels, 3 game modes (Player vs AI, Player vs Player, AI vs AI), 2 different game structures (Rectangle/Pyramid), a choice of structure size, screenshots (for fun, with a fork/execvp on import), menus and game duration.

I also coded the basic text version, which is accessible in the repository's text-version directory.

    Requirements :

    make

    gcc

    Usage :

    1. git clone https://github.com/neoski/marienbad.git

    2. cd marienbad && make

    3. ./marienbad

Note: The current ncurses library path used for compilation works for Mac OS X.
You might need to change it depending on where the library is located.
You can find it by typing man curses in your shell.
Then replace # include <ncurses.h> and # include <curses.h> with the correct path in include/allum.h.

    Author : Sebastien S.

    Github repository : https://github.com/neoski/marienbad

Made during my studies in February 2015.

    Visit original content creator repository
    https://github.com/neoski/marienbad

  • re-frame-fetch-fx

    Clojars Project GitHub issues License

    This re-frame library contains an Effect Handler for fetching resources.

    Keyed :fetch, it wraps browsers’ native js/fetch API.

    Add the following project dependency: Clojars Project

    Requires re-frame >= 0.8.0.

In the namespace where you register your event handlers, perhaps called events.cljs, you have two things to do.

    First, add this require to the ns:

    (ns app.events
      (:require
       ...
       [superstructor.re-frame.fetch-fx]
       ...))

    Because we never subsequently use this require, it appears redundant. But its existence will cause the :fetch effect handler to self-register with re-frame, which is important to everything that follows.

    Second, write an event handler which uses this effect:

    (reg-event-fx
      :handler-with-fetch
      (fn [{:keys [db]} _]
        {:fetch {:method                 :get
                 :url                    "https://api.github.com/orgs/day8"
                 :mode                   :cors
                 :timeout                5000
                 :response-content-types {#"application/.*json" :json}
                 :on-success             [:good-fetch-result]
                 :on-failure             [:bad-fetch-result]}}))

    With the exception of JSON there is no special handling of the :body value or the request’s Content-Type header. So for anything except JSON you must handle that yourself.

    For convenience for JSON requests :request-content-type :json is supported which will:

    • set the Content-Type header of the request to application/json

    • evaluate clj→js on the :body then js/JSON.stringify it.

:response-content-types is a mapping of pattern or string to a keyword representing one of the following processing models in Table 1.

    The pattern or string will be matched against the response Content-Type header then the associated keyword is used to determine the processing model and result type.

    In the absence of a response Content-Type header the value that is matched against will default to text/plain.

    In the absence of a match the processing model will default to :text.

Table 1. Response Content Types

Keyword         Processing                                    Result Type
:json           json() then js->clj :keywordize-keys true     ClojureScript
:text           text()                                        String
:form-data      formData()                                    js/FormData
:blob           blob()                                        js/Blob
:array-buffer   arrayBuffer()                                 js/ArrayBuffer

    All possible values of a :fetch map.

    (reg-event-fx
      :handler-with-fetch
      (fn [{:keys [db]} _]
        {:fetch {;; Required. Can be one of:
                 ;; :get | :head | :post | :put | :delete | :options | :patch
                 :method                 :get
    
                 ;; Required.
                 :url                    "https://api.github.com/orgs/day8"
    
                 ;; Optional. Can be one of:
                 ;; ClojureScript Collection | String | js/FormData | js/Blob | js/ArrayBuffer | js/BufferSource | js/ReadableStream
                 :body                   "a string"
    
                 ;; Optional. Only valid with ClojureScript Collection as :body.
                 :request-content-type   :json
    
                 ;; Optional. Map of URL query params
                 :params                 {:user     "Fred"
                                          :customer "big one"}
    
                 ;; Optional. Map of HTTP headers.
                 :headers                {"Authorization"  "Bearer QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
                                          "Accept"         "application/json"}
    
                 ;; Optional. Defaults to :same-origin. Can be one of:
                 ;; :cors | :no-cors | :same-origin | :navigate
                 ;; See https://developer.mozilla.org/en-US/docs/Web/API/Request/mode
                 :mode                   :cors
    
                 ;; Optional. Defaults to :include. Can be one of:
                 ;; :omit | :same-origin | :include
                 ;; See https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
                 :credentials            :omit
    
                 ;; Optional. Defaults to :follow. Can be one of:
                 ;; :follow | :error | :manual
                 ;; See https://developer.mozilla.org/en-US/docs/Web/API/Request/redirect
                 :redirect               :follow
    
                 ;; Optional. Can be one of:
                 ;; :default | :no-store | :reload | :no-cache | :force-cache | :only-if-cached
                 ;; See https://developer.mozilla.org/en-US/docs/Web/API/Request/cache
                 :cache                  :default
    
                 ;; Optional. Can be one of:
                 ;; :no-referrer | :client
                 ;; See https://developer.mozilla.org/en-US/docs/Web/API/Request/referrer
                 :referrer               :client
    
                 ;; See https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity
                 :integrity              "sha256-BpfBw7ivV8q2jLiT13fxDYAe2tJllusRSZ273h2nFSE="
    
                 :timeout                5000
    
                 :response-content-types {#"application/.*json"      :json
                                          "text/plain"               :text
                                          "multipart/form-data"      :form-data
                                          #"image/.*"                :blob
                                          "application/octet-stream" :array-buffer}
    
                 ;; Optional. If you want to associate multiple requests with a single
                 ;; AbortSignal you can pass it as value for the :abort-signal and use your own
                 ;; (external) AbortController to handle aborts.
             :abort-signal           (.-signal (js/AbortController.))
    
                 ;; Use :request-id with ::abort effect to abort the request
                 ;; Note: when using :abort-signal you cannot abort the request using :request-id
                 :request-id             :my-custom-request-id
                 ;; or auto-generated
                 :on-request-id          [:fetch-request-id]
    
                 :on-success             [:good-fetch-result]
    
                 :on-failure             [:bad-fetch-result]}}))

    There are two different ways you can abort requests:

    • Abort a (single) request by passing its request-id to the ::abort effect:

    (reg-event-fx
      :abort-request
      (fn [_ [request-id]]
        {::abort {:request-id request-id}}))

    Note: Reusing the same request-id for multiple different requests will not work. The ::abort effect would only abort the last of these requests.

    • Abort multiple requests by using an external AbortController. Pass the AbortController’s AbortSignal instance as value for the :abort-signal inside the ::fetch effect map.

    Note: When you decide to use an external AbortController by passing its :abort-signal in the ::fetch map, you cannot abort this request via the ::abort effect anymore.

    :on-success is dispatched with a response map like:

    {:url         "http://localhost..."
     :ok?         true
     :redirected? false
     :status      200
     :status-text "OK"
     :type        "cors"
     :final-uri?  nil
     :body        "Hello World!"
     :headers     {:cache-control "private, max-age=0, no-cache" ...}}

    Note the type of :body changes drastically depending on both the provided :response-content-types map and the response’s Content-Type header.

Unfortunately, for cases where there is no server response, the js/fetch API provides terribly little information that can be captured programmatically. If :on-failure is dispatched with a response like:

    {:problem         :fetch
     :problem-message "Failed to fetch"}

    Then it may be caused by any of the following or something else not included here:

    • :url syntax error

    • unresolvable hostname in :url

    • no network connection

    • Content Security Policy

    • Cross-Origin Resource Sharing (CORS) Policy or lacking :mode :cors

Look in the Chrome Developer Tools console. There is usually a useful error message indicating the problem, but so far I have not found out how to capture it to provide more fine-grained :problem keywords.

    If :timeout is exceeded, :on-failure will be dispatched with a response like:

    {:problem         :timeout
     :problem-message "Fetch timed out"}

    If there is a problem reading the body after the server has responded, such as a JSON syntax error, :on-failure will be dispatched with a response like:

    {:problem         :body
     :reader          :json
     :problem-message "Unexpected token < in JSON at position 0"
     ... rest of normal response map excluding :body ... }

    If the server responds with an unsuccessful HTTP status code, such as 500 or 404, :on-failure will be dispatched with a response like:

    {:problem :server
     ... rest of normal response map ... }

Previously with :http-xhrio the request URL was keyed :uri.

    Now with :fetch we follow the Fetch Standard nomenclature so it is keyed :url.

    Previously with :http-xhrio URL parameters and the request body were both keyed as :params. Which one it was depended on the :method (i.e. GET would result in URL parameters whereas POST would result in a request body).

    Now with :fetch there are two keys.

    :params is only URL parameters. It will always be added to the URL regardless of :method.

    :body is the request body. In practice it is only supported for :put, :post and :patch methods. Theoretically HTTP request bodies are allowed for all methods except :trace, but just don’t as there be dragons.

Response body handling has completely changed in every way, including the keys used, how to specify the handling of the response body and the types of values used for the response body. See Request Content Type and Response Content Types.

    Previously with :http-xhrio CORS requests would ‘just work’.

    Now with :fetch :mode :cors must be set explicitly as the default mode for js/fetch is :same-origin which blocks CORS requests.

    Copyright © 2019 Isaac Johnston.

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    Visit original content creator repository https://github.com/superstructor/re-frame-fetch-fx
  • testcafe-browser-provider-browserstack

    testcafe-browser-provider-browserstack

    Tests

    This plugin integrates TestCafe with the BrowserStack Testing Cloud.

    Install

    npm i -g testcafe-browser-provider-browserstack

    Usage

    Before using this plugin, save the BrowserStack username and access key to environment variables BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY.

    Project name and build name will be displayed in BrowserStack if you set the BROWSERSTACK_PROJECT_NAME and BROWSERSTACK_BUILD_ID environment variables, or the project and build properties in the configuration file.

If you have trouble starting multiple browsers at once, or get browserstack-local related errors like #27, try setting the BROWSERSTACK_PARALLEL_RUNS environment variable to the number of browsers you want to run simultaneously, or to 1 if you want to run just one browser.

    You can determine the available browser aliases by running

    testcafe -b browserstack

    If you run tests from the command line, use the alias when specifying browsers:

    testcafe "browserstack:Chrome@53.0:Windows 10" "path/to/test/file.js"

    When you use API, pass the alias to the browsers() method:

    testCafe
        .createRunner()
        .src('path/to/test/file.js')
        .browsers('browserstack:Chrome@53.0:Windows 10')
        .run();

    Tip: you can skip version (@53.0) or/and OS name (:Windows 10).

    BrowserStack Proxy Options

    Proxy options can be passed via environment variables.

    • BROWSERSTACK_PROXY – a string that specifies a proxy for the BrowserStack local binary. It should have the following structure: user:pass@proxyHostName:port,
    • BROWSERSTACK_LOCAL_PROXY – a string that specifies a proxy for the local web server. It should have the following structure: user:pass@proxyHostName:port,
    • BROWSERSTACK_FORCE_PROXY – if it’s not empty, forces all traffic of BrowserStack local binary to go through the proxy,
    • BROWSERSTACK_FORCE_LOCAL – if it’s not empty, forces all traffic of BrowserStack local binary to go through the local machine
    • BROWSERSTACK_NO_LOCAL – If it’s not empty, forces all traffic of BrowserStack to go over public internet
    • BROWSERSTACK_LOCAL_IDENTIFIER – a string identifier of an open BrowserStack local tunnel. If it’s not empty, a new local tunnel is not created. Instead, the browser provider uses an existing local tunnel with the specified identifier.

    Example:

    export BROWSERSTACK_PROXY="user:p@ssw0rd@proxy.com:8080"
    export BROWSERSTACK_LOCAL_PROXY="admin:12345678@192.168.0.2:8080"
    export BROWSERSTACK_FORCE_PROXY="1"
    export BROWSERSTACK_FORCE_LOCAL="1"
    testcafe browserstack:chrome test.js

    Other BrowserStackLocal Options

    This plugin also allows you to specify the following BrowserStackLocal options via environment variables:

    Option Environment Variable
    binarypath BROWSERSTACK_BINARY_PATH
    logFile BROWSERSTACK_LOGFILE
    verbose BROWSERSTACK_VERBOSE

    Example:

    export BROWSERSTACK_BINARY_PATH="~/BrowserStack/BrowserStackLocal"
    export BROWSERSTACK_LOGFILE="~/BrowserStack/logs.txt"
    export BROWSERSTACK_VERBOSE="1"
    testcafe browserstack:chrome test.js

    BrowserStack JS Testing and BrowserStack Automate

    BrowserStack offers two APIs for browser testing:

    JS testing supports more types of devices (compare: JS Testing Devices vs Automate Devices), while Automate allows for much longer tests (2 hours vs 30 minutes) and provides some additional features (like the window resizing functionality).

    TestCafe uses the JS Testing API by default. In order to use BrowserStack Automate, set the BROWSERSTACK_USE_AUTOMATE environment variable to 1.

    Example:

    export BROWSERSTACK_USE_AUTOMATE="1"
    testcafe browserstack:chrome test.js

    Setting Display Resolution

    To set the display resolution, use the BROWSERSTACK_DISPLAY_RESOLUTION environment variable or the resolution property in the configuration file. Valid resolutions can be found here.

    Remember that this only sets the display resolution and does not resize the browser window. You’ll still need to use TestCafe’s window resizing API to do so.

    Example:

    export BROWSERSTACK_DISPLAY_RESOLUTION="1024x768"
    testcafe browserstack:chrome test.js

    Specifying Chrome Command Line Arguments

    To set Chrome command line arguments, use the BROWSERSTACK_CHROME_ARGS environment variable. You can specify multiple arguments by joining them with the space symbol. This option works only if the BrowserStack Automate API is enabled.

    Examples:

    export BROWSERSTACK_USE_AUTOMATE="1"
    export BROWSERSTACK_CHROME_ARGS="--autoplay-policy=no-user-gesture-required"
    testcafe browserstack:chrome test.js
    export BROWSERSTACK_USE_AUTOMATE="1"
    export BROWSERSTACK_CHROME_ARGS="--start-maximized --autoplay-policy=no-user-gesture-required"
    testcafe browserstack:chrome test.js

    Other BrowserStack Options

    BrowserStack Automate allows you to provide options for its internal Selenium Grid in the form of key-value pairs called capabilities.

    To specify BrowserStack capabilities via the TestCafe BrowserStack provider, use environment variables or the configuration file. This provider supports the following capabilities:

    Capability Environment Variable
    project BROWSERSTACK_PROJECT_NAME
    build BROWSERSTACK_BUILD_ID (BROWSERSTACK_BUILD_NAME may also be used)
    resolution BROWSERSTACK_DISPLAY_RESOLUTION
    name BROWSERSTACK_TEST_RUN_NAME
    acceptSslCerts BROWSERSTACK_ACCEPT_SSL_CERTS
    browserstack.debug BROWSERSTACK_DEBUG
    browserstack.console BROWSERSTACK_CONSOLE
    browserstack.networkLogs BROWSERSTACK_NETWORK_LOGS
    browserstack.video BROWSERSTACK_VIDEO
    browserstack.timezone BROWSERSTACK_TIMEZONE
    browserstack.geoLocation BROWSERSTACK_GEO_LOCATION
    browserstack.customNetwork BROWSERSTACK_CUSTOM_NETWORK
    browserstack.networkProfile BROWSERSTACK_NETWORK_PROFILE

    Refer to the BrowserStack documentation for information about the values you can specify.

    Example

    export BROWSERSTACK_DEBUG="true"
    export BROWSERSTACK_TIMEZONE="UTC"
    testcafe browserstack:chrome test.js

    Configuration File

    You can specify BrowserStack capability options in a JSON configuration file as an alternative to environment variables. Use capability names for configuration file properties. If an option is set in both the configuration file and an environment variable, the environment variable setting takes priority.

    To use a configuration file, pass the file path in the BROWSERSTACK_CAPABILITIES_CONFIG_PATH environment variable:

    export BROWSERSTACK_CAPABILITIES_CONFIG_PATH="./data/browserstack-config.json"
    testcafe browserstack:chrome test.js

    browserstack-config.json

    {
        "build":                       "build-1",
        "project":                     "my-project",
        "resolution":                  "1024x768",
        "name":                        "Run 1",
        "browserstack.debug":          true,
        "browserstack.console":        "errors",
        "browserstack.networkLogs":    true
    }

    Exceeding the Parallel Test Limit

    When you run tests in multiple browsers or concurrently, you may exceed the maximum number of parallel tests available for your account.

    Assume your plan allows 2 parallel tests, and you run one of the following commands:

    testcafe 'browserstack:ie@11.0:Windows 10','browserstack:chrome@59.0:Windows 10','browserstack:safari@9.1:OS X El Capitan' tests/acceptance/
    testcafe browserstack:ie@11.0:Windows 10 -c3 tests/acceptance/

    In this instance, BrowserStack will refuse to provide all the required machines and TestCafe will throw an error:

    Unable to establish one or more of the specified browser connections.
    

    To keep within your account limitations, you can run tests sequentially (or in batches), like in the following bash script (credits to @maoberlehner for this example):

    browsers=( "browserstack:ie@10.0:Windows 8" "browserstack:ie@11.0:Windows 10" "browserstack:edge@15.0:Windows 10" "browserstack:edge@14.0:Windows 10" "browserstack:firefox@54.0:Windows 10" "browserstack:firefox@55.0:Windows 10" "browserstack:chrome@59.0:Windows 10" "browserstack:chrome@60.0:Windows 10" "browserstack:opera@46.0:Windows 10" "browserstack:opera@47.0:Windows 10" "browserstack:safari@9.1:OS X El Capitan" "browserstack:safari@10.1:OS X Sierra" )
    
    for i in "${browsers[@]}"
    do
    	./node_modules/.bin/testcafe "${i}" tests/acceptance/
    done

    Configuring the API Polling Interval for BrowserStack Automate

    BrowserStack Automate is based on WebDriver, which forcefully shuts down inactive sessions after an idle timeout expires. This works for WebDriver users, since each page action (clicks, types, etc.) triggers a WebDriver command and thus resets the idle timer.

    However, TestCafe is not WebDriver-based. It simulates page actions in a different way and it doesn’t trigger WebDriver commands. To prevent test session from being terminated by the BrowserStack WebDriver server due to inactivity, TestCafe triggers a dummy WebDriver command once in a while.

    However, if the network connection is unstable, a request that triggers this dummy command can fail. In this instance, the BrowserStack WebDriver server doesn’t receive the command before the idle timeout expires, and the test session can be terminated due to inactivity.

    If your BrowserStack builds are terminated due to the idle timeout frequently, you can try to decrease the delay before the dummy WebDriver command is sent. In case the first request fails to trigger the command due to a network problem, the next may succeed and thus prevent your test session from being terminated.

Use the TESTCAFE_BROWSERSTACK_API_POLLING_INTERVAL environment variable to control this delay. This variable specifies the time (in milliseconds) to pass until an additional request that triggers a dummy WebDriver command is sent to the BrowserStack WebDriver server. The default delay is 80000 milliseconds. If the BrowserStack idle timeout is 90 seconds (or 90000 milliseconds), at least one request is processed by the BrowserStack server in normal network conditions. If you set it to 40000, two requests are processed by the WebDriver server if your network is good. In case of network issues, either request may fail without breaking the build.

    Example

    export TESTCAFE_BROWSERSTACK_API_POLLING_INTERVAL="40000"
    testcafe browserstack:chrome test.js

    See Also

    You can also refer to the BrowserStack documentation for a detailed step-by-step guide that explains how to run TestCafe tests on BrowserStack.

    Author

    Developer Express Inc. (https://devexpress.com)

    Visit original content creator repository https://github.com/DevExpress/testcafe-browser-provider-browserstack
  • express-guides

    Express Guides

    Authentication

    command

    npm init -y
    npm i cors express mysql2 jsonwebtoken cookie-parser express-session bcrypt

    1. Set Token in local storage and send it to server in header

Receive a token from the server, store it in local storage, and send it to the server in a header on each request (for example with the fetch API or axios).

Note that the server sends the token back after login via the response:

    res.json({ message: 'Login successfully', token })

The client then stores the received token in local storage:

    localStorage.setItem('token', response.data.token);

When calling an API with a Promise-based function such as axios, the token is sent along in the header:

    axios.get('http://localhost:3000/api/user', {
      headers: {
        Authorization: `Bearer ${token}`
      }
    })
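
On the server side, a minimal middleware sketch for checking this header could look like the following (assuming the token was signed with jsonwebtoken at login; the secret 'SECRET_KEY' and the route are illustrative, not part of the original guide):

const jwt = require('jsonwebtoken')

// Hypothetical middleware: reads "Authorization: Bearer <token>" and verifies it
const authenticateToken = (req, res, next) => {
  const authHeader = req.headers['authorization']
  const token = authHeader && authHeader.split(' ')[1] // strip the "Bearer " prefix
  if (!token) return res.status(401).json({ message: 'No token provided' })
  try {
    req.user = jwt.verify(token, 'SECRET_KEY') // must match the secret used when signing at login
    next()
  } catch (err) {
    res.status(403).json({ message: 'Invalid token' })
  }
}

app.get('/api/user', authenticateToken, (req, res) => {
  res.json({ user: req.user })
})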

    2. Set Token in cookie and send it to server

Here the token is stored in a cookie and sent to the server automatically (in the Cookie header) with each request.

Note that after login the server sends a cookie containing the token via the response:

    res.cookie('token', token, {
      httpOnly: true,
      secure: true,
      sameSite: 'none'
    }).json({ message: 'Login successfully' });

The client stores the received token in a cookie automatically.

When calling an API with a Promise-based function such as axios, send withCredentials: true instead of sending the token in a header:

    axios.get('http://localhost:3000/api/user', {
      withCredentials: true
    })
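
On the server side, a minimal sketch for reading the token from the cookie could look like this (assuming cookie-parser and jsonwebtoken are used; the CORS origin and the secret are illustrative):

const cors = require('cors')
const cookieParser = require('cookie-parser')
const jwt = require('jsonwebtoken')

app.use(cookieParser())
// CORS must allow credentials, otherwise the browser will not send the cookie
app.use(cors({ origin: 'http://localhost:5173', credentials: true }))

const authenticateCookie = (req, res, next) => {
  const token = req.cookies.token // set by res.cookie('token', ...) at login
  if (!token) return res.status(401).json({ message: 'No token provided' })
  try {
    req.user = jwt.verify(token, 'SECRET_KEY') // hypothetical secret
    next()
  } catch (err) {
    res.status(403).json({ message: 'Invalid token' })
  }
}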

    3. Set Token in session into server

In this case the server keeps the token in a session, i.e. the data is stored on the server itself.

Note that after login the token is stored in the session and a response is sent:

    req.session.token = token;
    res.json({ message: 'Login successfully' });

The client does not need to do anything extra. When calling an API with a Promise-based function such as axios, the token does not need to be sent;
only withCredentials: true is sent, just like in the cookie-based case:

    axios.get('http://localhost:3000/api/user', {
      withCredentials: true
    })

This approach relies on storing the data directly on the server; the client does not have to keep anything, and no token needs to be sent to the server on every API request.
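
A minimal sketch of the server-side setup for this approach, assuming express-session with its default in-memory store (the secret, cookie options and route are illustrative):

const session = require('express-session')

app.use(session({
  secret: 'SESSION_SECRET',          // hypothetical secret
  resave: false,
  saveUninitialized: false,
  cookie: { httpOnly: true, maxAge: 1000 * 60 * 60 } // session cookie valid for 1 hour
}))

// The token saved at login lives on the server, keyed by the session cookie
const authenticateSession = (req, res, next) => {
  if (!req.session.token) {
    return res.status(401).json({ message: 'Not logged in' })
  }
  next()
}

app.get('/api/user', authenticateSession, (req, res) => {
  res.json({ message: 'You are logged in' })
})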

    File Uploads

    1. Upload file to server

The file's binary data (Blob) is stored directly on the server.
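
A minimal sketch of such an upload endpoint using multer's disk storage (the field name 'test', the port and the uploads/ folder follow the conventions used later in this guide; treat it as an illustration):

const express = require('express')
const multer = require('multer')

const app = express()

// Store uploaded files on disk under uploads/, prefixed with the upload time
const storage = multer.diskStorage({
  destination: (req, file, cb) => cb(null, 'uploads/'),
  filename: (req, file, cb) => cb(null, `${Date.now()}-${file.originalname}`),
})
const upload = multer({ storage })

app.post('/api/upload', upload.single('test'), (req, res) => {
  res.json({ message: 'File uploaded successfully', filename: req.file.filename })
})

app.listen(8000)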

    2. File Uploads with Progress Bar

With axios you can pass onUploadProgress, a callback that runs while the file is uploading. It receives an event whose loaded and total fields are the number of bytes uploaded so far and the total file size.

    const response = await axios
      .post('http://localhost:8000/api/upload', formData, {
        headers: {
          'Content-Type': 'multipart/form-data',
        },
        onUploadProgress: function(progressEvent) {
      // update the upload progress in the UI
          const percentCompleted = Math.round((progressEvent.loaded * 100) / progressEvent.total)
          progressBar.value = percentCompleted
          uploadPercentageDisplay.innerText = `${percentCompleted}%`
        },
      })

    3. Validation

Uploaded files can be validated on both the client and the server, by size and by type.

    Validation: size

The client can validate the file before sending it to the server, for example by checking its size:

    const selectedFile = fileInput.files[0]
    if (selectedFile.size > 1024 * 1024 * 5) {
      return alert('Too large file, please choose a file smaller than 5MB')
    }

The server can also validate the received file, for example by checking its size and type:

    const upload = multer({
      storage,
      limits: {
        fileSize: 1024 * 1024 * 5, // 5MB
      },
    })

    Validation: mimeType

mimeType is the media type of the file, e.g. image/jpeg, image/png, application/pdf. It is not the same thing as the file extension.

The client can validate the file type before sending it to the server:

    const selectedFile = fileInput.files[0]
    if (!['image/jpeg', 'image/png', 'application/pdf'].includes(selectedFile.type)) {
      return alert('Invalid file type, please choose a valid file type')
    }

The server can validate the type of the received file as well:

    const upload = multer({
      storage,
      fileFilter: (req, file, cb) => {
        if (['image/jpeg', 'image/png', 'application/pdf'].includes(file.mimetype)) {
          cb(null, true)
        } else {
          cb(new Error('Invalid file type'))
        }
      },
    })

To make the resulting error available for further handling,
we change app.post from passing upload.single('test') as middleware to calling it inside the route handler instead:

    app.post('/api/upload', (req, res) => {
      upload.single('test')(req, res, (err) => {
        if (err) {
          return res.status(400).json({ message: 'Multer error' })
        }
        res.json({ message: 'File uploaded successfully' })
      })
    })

    4. Cancel Upload

On the client side, axios can cancel an upload by using CancelToken and source to create a cancel token, and cancel to abort the upload.

Inside <script>...</script>, create a variable let currentSource = null to hold the source that will be created.

Then, in uploadFile, create a source and keep it in currentSource; when the cancel button is clicked, cancelUploadBtn is called, which calls cancel on the source stored in currentSource.

const source = axios.CancelToken.source() // create a cancel token
currentSource = source // keep the current source so the upload can be cancelled later

Add cancelToken: source.token to the axios request sent to the server:

    const response = await axios.post('http://localhost:8000/api/upload', formData, {
      headers: {
        'Content-Type': 'multipart/form-data',
      },
      onUploadProgress: function(progressEvent) {
        const percentCompleted = Math.round((progressEvent.loaded * 100) / progressEvent.total)
        progressBar.value = percentCompleted
        uploadPercentageDisplay.innerText = `${percentCompleted}%`
      },
+ cancelToken: source.token, // send the cancel token along with the request
    })

Create cancelUploadBtn, which calls cancel on the source stored in currentSource:

    const cancelUploadBtn = () => {
      if (currentSource) {
        currentSource.cancel('Operation canceled by the user.')
      }
    }

Then hook cancelUploadBtn up to the cancel button:

    <button onclick="cancelUploadBtn()">Cancel</button>

    5. Remove File after Cancel Upload

When an upload is cancelled, the partially uploaded file should be removed from the server.

    const fs = require('fs')
    const path = require('path')

Add an event listener inside the filename function of diskStorage that deletes the uploaded file from the server when the upload is aborted:

    const storage = multer.diskStorage({
      destination: (req, file, cb) => {
    cb(null, 'uploads/') // store files in the uploads folder at the project root
      },
      filename: (req, file, cb) => {
        const filename = `${Date.now()}-${file.originalname}`
    cb(null, filename) // keep the original filename, prefixed with the upload time
+    req.on('aborted', () => {
+     // if the upload is aborted, delete the file that has already been written
+     const filePath = path.join('uploads', filename)
+     fs.unlinkSync(filePath)
+   })
      },
    })

    Cache Design Patterns

Install the required libraries

    npm i body-parser mysql2 redis node-cron

Start the project the same way as in the MySQL example, but additionally connect to Redis and set up cron

    const express = require('express')
    const bodyParser = require('body-parser')
    const mysql = require('mysql2')
    const redis = require('redis')
    const cron = require('node-cron')
    
    const app = express()
    app.use(bodyParser.json())
    const port = 8000
    
    /* connect to MySQL as usual (create initMySql()) */
    
    let redisConn = null
    
    const initRedis = async () => {
      redisConn = redis.createClient()
      redisConn.on('error', (err) => {
        console.log('Redis error: ' + err)
      })
      await redisConn.connect()
    }
    
    app.listen(port, async () => {
      await initMySql()
      await initRedis()
      console.log(`Server is running on port ${port}`)
    })

There are 3 cache access patterns in total

    1. Lazy loading (Cache-Aside)

Data is not loaded into the cache up front; instead, an item is loaded into the cache only when it is requested.
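
A minimal cache-aside sketch on top of the setup above, assuming conn is a promise-based mysql2 connection created in initMySql() and a hypothetical users table:

// Cache-aside: check Redis first, fall back to MySQL on a miss, then fill the cache
app.get('/api/users', async (req, res) => {
  const cacheKey = 'users:all'
  const cached = await redisConn.get(cacheKey)
  if (cached) {
    return res.json(JSON.parse(cached)) // cache hit: no database query needed
  }
  const [rows] = await conn.query('SELECT * FROM users') // cache miss: load from MySQL
  await redisConn.set(cacheKey, JSON.stringify(rows), { EX: 60 }) // keep in cache for 60 seconds
  res.json(rows)
})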

    2. Write-through

    3. Write-behind (Write-back)

    Elasticsearch

    Kafka Distribution System

    RabbitMQ

    Visit original content creator repository
    https://github.com/Washira/express-guides