Author: cajqdtgnbhc9

  • sistema-agendamento

    Sistema de Agendamento

    agendar-consultas

    This is an appointment scheduling system project developed as part of earlier studies. The system lets users create and view appointments on a calendar, see the details of individual appointments, and receive email notifications when an appointment is coming up.

    Overview

    The Scheduling System is a web application that offers the following features:

    1. A full calendar view using FullCalendar.js.
    2. Display of appointments on calendar dates.
    3. Details of individual appointments, including description, name, CPF, date, and time.
    4. Completion of appointments via a button.
    5. A registration route (/cadastrar) for creating new appointments.
    6. Automatic email notifications, sent with Nodemailer, when an appointment is about to take place.
    7. EJS, CSS, and Bootstrap for the frontend.
    8. Express for the server.
    9. MongoDB integration using Mongoose.

    Technologies Used

    The project uses the following technologies and packages:

    • FullCalendar.js: a JavaScript library for building interactive calendars.
    • EJS: an embedded templating language for generating HTML with JavaScript.
    • CSS: the styling language for the application's appearance.
    • Bootstrap: a CSS framework.
    • Express: a web framework for Node.js.
    • Mongoose: a MongoDB object modeling library for Node.js.
    • Nodemailer: a library for sending e-mails.
    • MongoDB: a NoSQL database.

    Usage Instructions

    To run the project on your machine, follow the steps below:

    1. Clone this repository:

      git clone https://github.com/Snarloff/sistema-agendamento.git
    2. Navigate to the project directory:

      cd sistema-agendamento
    3. Install the dependencies:

      npm install
    4. Start the server:

      node .

    The application will be available at http://localhost:3000.

    Make sure the MongoDB connection and the Nodemailer settings are configured correctly.

    Screenshots of the project are available in the repository.

    Contributing

    This project is intended for study purposes and is no longer under active development. Contributions are welcome, but be aware that the project may not meet all production standards.

    Issues and Suggestions

    If you run into problems or have suggestions for improving this project, please open an issue in this repository.

  • slackerdb

    Figure 1. RobotSlacker

    SlackerDB (DuckDB Postgres proxy)

    Quick Note

    This is an agile DuckDB extension that provides Java-based connectivity with network access and multiprocess support.

    What we can do:

    This extension can do the following:

    • Enables remote access to DuckDB over TCP/IP, instead of restricting access to local connections only.

    • Supports multi-process access to DuckDB instead of the single-process restriction.

    • Provides PostgreSQL wire protocol compatibility (JDBC, ODBC, etc.), allowing DuckDB to serve as a PostgreSQL database.

    • Offers a dedicated access client, which has:

      1. Advanced features

      2. Comprehensive data dictionary access support

    • You can use the COPY syntax to import data with high performance; this is compatible with the PG CopyManager.

    • Provides self-managed data services and API.

    • You have multiple ways to connect to this extension:

      • Directly connect to the server with jdbc, odbc, …​

      • Connect through a connection gateway with multiple servers (on different hosts or in different processes) behind it.

      • Embed the compiled jar package into your own application.

      • Register a data service and access data through REST API.

    Usage

    Build from source:

        # make sure you have JDK17 and maven 3.6+ ready.
        # Download source code
        git clone ...
        # compile it
        cd slackerdb
        mvn clean compile package
    
        # All compiled results will be placed in the dist directory.
        #    compiled Jar packages,
        #    default configuration files.

    Start db server

        java -jar dbserver/target/slackerdb-dbserver-0.1.8-standalone.jar start

    Stop db server

        java -jar dbserver/target/slackerdb-dbserver-0.1.8-standalone.jar stop

    Check db status

        java -jar dbserver/target/slackerdb-dbserver-0.1.8-standalone.jar status

    Start db proxy

        java -jar dbproxy/target/slackerdb-dbproxy-0.1.8-standalone.jar start

    Stop db proxy

        java -jar dbproxy/target/slackerdb-dbproxy-0.1.8-standalone.jar stop

    Check proxy status

        java -jar dbproxy/target/slackerdb-dbproxy-0.1.8-standalone.jar status

    Server configuration file template

    # Database name, default is none.
    data=
    
    # Path where data files are saved
    # ":memory:" indicates in-memory mode (data will be lost after restart)
    data_dir=:memory:
    
    # Temporary files directory during operations
    # Disk mode:   Defaults to data_dir if not specified
    # Memory mode: Defaults to system temp directory if not specified
    # Recommended: Use high-performance storage for temp_dir
    temp_dir=
    
    # Directory for extension plugins
    # Default: $HOME/.duckdb/extensions
    extension_dir=
    
    # Run as background daemon (true/false)
    daemon=
    
    # PID file location
    # - Locks exclusively during server operation
    # - Startup aborts if file is locked by another process
    # - No file created if not configured, and no lock.
    pid=
    
    # Log output destinations (comma-separated)
    # "CONSOLE" for stdout, file paths for file logging
    log=CONSOLE,logs/slackerdb.log
    
    # Log verbosity level
    log_level=INFO
    
    # Main service port
    # 0 = random port assignment
    # -1 = disable network interface
    # Default: 0 (disabled)
    port=
    
    # Data service API port
    # 0 = random port assignment
    # -1 = disable interface (default)
    port_x=
    
    # Network binding address
    bind=0.0.0.0
    
    # Client connection idle timeout (seconds)
    client_timeout=600
    
    # External remote listener registry endpoint
    # Format: IP:PORT
    # Default: none (disabled)
    remote_listener=
    
    # Database opening mode.
    # Default: READ_WRITE
    access_mode=READ_WRITE
    
    # Maximum concurrent connections
    # Default: 256
    max_connections=
    
    # Maximum worker threads
    # Default: CPU core count
    max_workers=
    
    # Database engine threads
    # Default: 50% of CPU cores
    # Recommendation: 5-10GB RAM per thread
    threads=
    
    # Memory usage limit (K/M/G suffix)
    # Default: 60% of available memory
    # -1 = unlimited (memory mode only)
    memory_limit=
    
    # Database template file
    template=
    
    # Initialization script(s)
    # Executes only on first launch
    # Accepts: .sql file or directory
    init_script=
    
    # Startup script(s)
    # Executes on every launch
    # Accepts: .sql file or directory
    startup_script=
    
    # System locale
    # Default: OS setting
    locale=
    
    # SQL command history
    # ON = enable tracking
    # OFF = disable (default)
    sql_history=OFF
    
    # Minimum idle connections in pool
    connection_pool_minimum_idle=3
    
    # Maximum idle connections in pool
    connection_pool_maximum_idle=10
    
    # Connection lifetime (milliseconds)
    connection_pool_maximum_lifecycle_time=900000
    
    # Query result cache configuration (in bytes)
    # - Only caches API request results (JDBC queries unaffected)
    # - Default: 1GB (1073741824 bytes)
    # - Set to 0 to disable caching
    query_result_cache_size=
    
    # Data service schema initialization
    # - Accepts:
    #   * JSON file path (single schema)
    #   * Directory path (loads all *.service files)
    # - Schema files should contain service definitions in JSON format
    data_service_schema=

    Proxy configuration file template

    # PID file location
    # - Locks exclusively during server operation
    # - Startup aborts if file is locked by another process
    # - No file created if not configured, and no lock.
    pid=
    
    # Log output destinations (comma-separated)
    # "CONSOLE" for stdout, file paths for file logging
    log=CONSOLE,logs/slackerdb-proxy.log
    
    # Log level
    log_level=INFO
    
    # Run as background daemon (true/false)
    daemon=
    
    # Main service port
    # 0 = random port assignment
    # -1 = disable network interface
    # Default: 0 (disabled)
    port=0
    
    # Data service API port
    # 0 = random port assignment
    # -1 = disable interface (default)
    port_x=0
    
    # Network binding address
    bind=0.0.0.0
    
    # Client connection idle timeout (seconds)
    client_timeout=600
    
    # Maximum worker threads
    # Default: CPU core count
    max_workers=
    
    # System locale
    # Default: OS setting
    locale=

    Note: All parameters are optional.
    You can keep only the parameters you need to modify.
    For parameters that are not configured, default values will be used.

    Data Service

    • The data service works on port_x; make sure you have enabled it in the server configuration or via a command-line parameter. Note that data security has not been considered, so data services must run in a trusted environment.

    user login

    User login (note: this is optional). On success, a token is returned.
    Context operations, and SQL access that requires context variables, require this token.
    If your program does not use the context feature, you can skip this login.
    Put simply, the token is currently used as the user ID.

    Attribute   Value
    Protocol    HTTP
    Method      POST
    Path        /api/login

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "token": "yJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9",
        "retMsg": "Login successful."
      }
    user logout

    User logout

    Attribute   Value
    Protocol    HTTP
    Method      POST
    Path        /api/logout

    headers:

    Attribute       Value
    Authorization   NzJjYjE3NmQtN2Y2ZC00OWMyLWIwODAtYTU1MDE3YzVmZDU1

    The token here is the one obtained from the earlier /api/login call.

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "retMsg": "Successful."
      }
    set context

    set context

    Attribute   Value
    Protocol    HTTP
    Method      POST
    Path        /api/setContxt

    headers:

    Attribute       Value
    Authorization   NzJjYjE3NmQtN2Y2ZC00OWMyLWIwODAtYTU1MDE3YzVmZDU1

    The token here is the one obtained from the earlier /api/login call.

    request body:

    Attribute   Value
    key1        value1
    key2        value2
    keyx        valuex

    You can set one or more key-value pairs at once, or you can set multiple key-value pairs by calling setContext multiple times.

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "retMsg": "Successful."
      }
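
    As a reference, here is a minimal Python sketch of the login-plus-context flow described above. The host and port (localhost:8080) and the empty login request body are assumptions for illustration; the paths, header, and response fields follow the tables in this section.

      # Sketch only: log in, then set two context variables using the returned token.
      # The base URL is a placeholder for wherever port_x is listening, and the login
      # request body (credentials) is not documented here, so none is sent.
      import requests

      base = "http://localhost:8080"

      token = requests.post(base + "/api/login").json()["token"]

      resp = requests.post(
          base + "/api/setContxt",
          headers={"Authorization": token},
          json={"key1": "value1", "key2": "value2"},
      )
      print(resp.json())  # e.g. {"retCode": 0, "retMsg": "Successful."}
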
    removeContext

    remove context

    Attribute   Value
    Protocol    HTTP
    Method      POST
    Path        /api/removeContxt

    headers:

    Attribute       Value
    Authorization   NzJjYjE3NmQtN2Y2ZC00OWMyLWIwODAtYTU1MDE3YzVmZDU1

    The token here is the one obtained from the earlier /api/login call.

    request body:

    Attribute        Value
    removedKeyList   [key1, key2, …]

    You can remove one or more key-value pairs at once, or you can remove multiple key-value pairs by calling removeContext multiple times.

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "retMsg": "Successful."
      }
    registerService

    register a service

    Attribute   Value
    Protocol    HTTP
    Method      POST
    Path        /api/registerService

    request body:

    Attribute        Value
    serviceName      service name
    serviceVersion   service version
    serviceType      service type, GET or POST
    searchPath       SQL default search path (optional)
    sql              SQL statement; may contain variables such as ${var1}
    description      description
    snapshotLimit    how long the query result will be cached (optional)
    parameter        default parameter values used when the query API does not provide them

    snapshotLimit format: 3 hours / 30 minutes / 45 seconds

    Request example:

      {
        "serviceName": "queryTest1",
        "serviceVersion": "1.0",
        "serviceType": "GET",
        "sql": "SELECT 1"
      }

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "retMsg": "Successful."
      }
    unRegisterService

    unregister a service

    Attribute   Value
    Protocol    HTTP
    Method      POST
    Path        /api/unRegisterService

    request body:

    Attribute        Value
    serviceName      service name
    serviceVersion   service version
    serviceType      service type, GET or POST

    Request example:

      {
        "serviceName": "queryTest1",
        "serviceVersion": "1.0",
        "serviceType": "GET"
      }

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "retMsg": "Successful."
      }
    listRegisteredService

    list all services

    Attribute   Value
    Protocol    HTTP
    Method      GET
    Path        /api/listRegisteredService

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "retMsg": "Successful.",
        "services":
          {
            "Query1":
            {
              "seviceName" : "Query1",
              "serviceType" : "GET",
              ....
            }
          }
      }
    /api/{apiVersion}/{apiName}

    API query

    Attribute   Value
    Protocol    HTTP
    Method      POST or GET
    Path        /api/{apiVersion}/{apiName}

    headers:

    Attribute       Value
    Authorization   NzJjYjE3NmQtN2Y2ZC00OWMyLWIwODAtYTU1MDE3YzVmZDU1
    snapshotLimit   Optional; overrides the service definition. 0 means no result cache.

    The token here is the one obtained from the /api/login call.
    The token is optional, but if you use context variables in your SQL statement, you must set it.
    snapshotLimit format: 3 hours / 30 minutes / 45 seconds

    GET Request example:

      GET /api/1.0/queryApi?context1=xxx&context2=yyy

    POST Request example:

      POST /api/1.0/queryApi
    
      {
        "context1": "xxx",
        "context2": "yyy"
      }

    Response example:

    Success response (200)
    
      {
        "retCode": 0,
        "retMsg": "Successful.",
        "description": "test 1",
        "cached": false,
        "timestamp": 17777700,
        "data":
          {
            "columnNames":["col1","col2","col3"],
            "columnTypes":["INTEGER","INTEGER","VARCHAR"],
            "dataset":[[1,2,"中国"]]
          }
      }
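
    To tie the service registration and query endpoints together, here is a small Python sketch. The base URL is a placeholder, and it assumes that {apiVersion} in the query path corresponds to the registered serviceVersion.

      # Sketch only: register a simple GET service, then call it through /api/{apiVersion}/{apiName}.
      import requests

      base = "http://localhost:8080"

      # Register a service (request body fields as documented above).
      requests.post(base + "/api/registerService", json={
          "serviceName": "queryTest1",
          "serviceVersion": "1.0",
          "serviceType": "GET",
          "sql": "SELECT 1",
      })

      # Query it; parameters, if any, would be passed as query-string arguments.
      result = requests.get(base + "/api/1.0/queryTest1")
      print(result.json()["data"]["dataset"])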

    SQL REPL Server

    SQL REPL Server provides an asynchronous WebSocket interface for executing SQL queries and fetching results in pages. It is useful for long-running queries where you want to avoid blocking the client, and aligns with the MCP (Model Context Protocol) philosophy of using WebSocket for bidirectional communication.

    To use it, ensure the data service API port (port_x) is enabled in server configuration.

    WebSocket Endpoint

    Connect to the WebSocket endpoint:

    Attribute   Value
    Protocol    WebSocket (ws:// or wss://)
    Path        /sql/ws

    Once connected, you can send JSON messages with the following general format:

    {
      "id": "unique-request-id",   // used to match responses
      "type": "message-type",      // one of: start, exec, fetch, cancel, close
      "data": { ... }              // payload specific to the message type
    }

    The server will respond with a JSON message that mirrors the request id and includes a retCode (0 for success, non‑zero for error) and relevant data.

    Session Management

    A session is created by sending a start message. The server returns a sessionId that must be included in subsequent messages for the same session.

    Start message:

    {
      "id": "1",
      "type": "start",
      "data": {}
    }

    Response:

    {
      "id": "1",
      "retCode": 0,
      "retMsg": "Session created",
      "sessionId": "session-123"
    }

    All further messages for this session must include "sessionId": "session-123" in their data field.

    Execute SQL (async)

    Submits a SQL statement for asynchronous execution. Returns a task ID that can be used to fetch results later.

    Message:

    {
      "id": "2",
      "type": "exec",
      "data": {
        "sessionId": "session-123",
        "sql": "SELECT * FROM large_table",
        "maxRows": 1000   // optional, default is 1000
      }
    }

    Response:

    {
      "id": "2",
      "retCode": 0,
      "retMsg": "Task submitted",
      "taskId": "550e8400-e29b-41d4-a716-446655440000",
      "status": "running"   // or "completed" if result fits in first page
    }

    If the SQL execution fails immediately (e.g., syntax error), the response will contain retCode != 0 and an error message.

    Fetch Results

    Retrieves a page of results for a given task ID. The endpoint returns a fixed number of rows (up to maxRows specified in execute) and indicates whether more rows are available.

    Message:

    {
      "id": "3",
      "type": "fetch",
      "data": {
        "sessionId": "session-123",
        "taskId": "550e8400-e29b-41d4-a716-446655440000",
        "maxRows": 100   // optional, overrides the default page size
      }
    }

    Response:

    {
      "id": "3",
      "retCode": 0,
      "retMsg": "Success",
      "taskId": "550e8400-e29b-41d4-a716-446655440000",
      "status": "completed",   // "running", "completed", or "error"
      "hasMore": false,        // true if there are more rows to fetch
      "columnNames": ["id", "name"],
      "columnTypes": ["INTEGER", "VARCHAR"],
      "dataset": [
        [1, "Alice"],
        [2, "Bob"]
      ]
    }

    If the task is still running (status = “running”), dataset may be empty and hasMore will be true. If the task has completed and all rows have been fetched, status becomes “completed” and hasMore false. If an error occurs during execution, status becomes “error” and retMsg contains the error details.

    You can send multiple fetch messages until hasMore becomes false. Each call returns the next page of rows.

    Cancel Task

    Cancels an ongoing SQL task. If no taskId is provided, cancels the current session’s active task.

    Message:

    {
      "id": "4",
      "type": "cancel",
      "data": {
        "sessionId": "session-123",
        "taskId": "550e8400-e29b-41d4-a716-446655440000"   // optional, omit to cancel the session's active task
      }
    }

    Response:

    {
      "id": "4",
      "retCode": 0,
      "retMsg": "Task cancelled"
    }
    Close Session

    Closes the session and releases all associated resources (including any pending tasks).

    Message:

    {
      "id": "5",
      "type": "close",
      "data": {
        "sessionId": "session-123"
      }
    }

    Response:

    {
      "id": "5",
      "retCode": 0,
      "retMsg": "Session closed"
    }

    After closing, the session ID is no longer valid.

    Example Workflow (using JavaScript/Node.js)
    1. Connect to WebSocket endpoint:

      const WebSocket = require('ws');
      const ws = new WebSocket('ws://localhost:8080/sql/ws');
    2. Create a session:

      ws.send(JSON.stringify({
        id: "1",
        type: "start",
        data: {}
      }));
    3. Execute a long query:

      ws.send(JSON.stringify({
        id: "2",
        type: "exec",
        data: {
          sessionId: "session-123",
          sql: "SELECT * FROM huge_table",
          maxRows: 500
        }
      }));
    4. Fetch first page:

      ws.send(JSON.stringify({
        id: "3",
        type: "fetch",
        data: {
          sessionId: "session-123",
          taskId: "abc123",
          maxRows: 100
        }
      }));
    5. Fetch subsequent pages until hasMore becomes false.

    6. Optionally cancel if needed:

      ws.send(JSON.stringify({
        id: "4",
        type: "cancel",
        data: {
          sessionId: "session-123",
          taskId: "abc123"
        }
      }));
    7. Close the session when done:

      ws.send(JSON.stringify({
        id: "5",
        type: "close",
        data: {
          sessionId: "session-123"
        }
      }));

    Note: The examples above assume you handle incoming messages asynchronously. In a real client you would match responses by their id.
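
    The same workflow can also be driven from Python with the third-party websockets package. A minimal sketch is shown below; it naively assumes replies arrive in request order, which (as noted above) a real client should not rely on, and it uses a placeholder host and port.

      # Sketch only: start a session, run a query, fetch one page, then close.
      import asyncio
      import json

      import websockets

      async def main():
          async with websockets.connect("ws://localhost:8080/sql/ws") as ws:
              async def call(msg):
                  await ws.send(json.dumps(msg))
                  return json.loads(await ws.recv())  # naive: no id matching

              session = (await call({"id": "1", "type": "start", "data": {}}))["sessionId"]
              task = (await call({"id": "2", "type": "exec",
                                  "data": {"sessionId": session, "sql": "SELECT 42"}}))["taskId"]
              page = await call({"id": "3", "type": "fetch",
                                 "data": {"sessionId": session, "taskId": task}})
              print(page.get("dataset"))
              await call({"id": "4", "type": "close", "data": {"sessionId": session}})

      asyncio.run(main())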

    Basic Services

    The following HTTP endpoints provide basic administrative functions, such as backup, file upload/download, log viewing, and status monitoring. These endpoints are available when the Data Services API port (port_x) is enabled.

    /backup (POST)

    Performs a database backup. The backup file is saved in the backup/ directory with the naming pattern {database}_{backupTag}.db.

    Request body (JSON):
    {
      "backupTag": "mybackup"
    }
    Response example (success):
    {
      "retCode": 0,
      "retMsg": "Successful. Backup file has placed to [C:\\Work\\slackerdb\\backup\\mydb_mybackup.db]."
    }
    /download (GET)

    Downloads a file from the server’s working directory.

    Query parameters:
    • filename (required) – relative path to the file.

    Example:
    GET /download?filename=logs/slackerdb.log

    The endpoint supports Range requests for partial downloads.

    /upload (POST)

    Uploads a file to the server’s working directory.

    Query parameters:
    • filename (required) – target file path.

    Form data:
    • file (required) – the file to upload.

    Example using curl:
    curl -X POST -F "file=@local.txt" "http://localhost:8080/upload?filename=uploads/remote.txt"
    Response:
    "uploaded: C:\\Work\\slackerdb\\uploads\\remote.txt"
    /viewLog (GET)

    Retrieves the last N lines of a log file.

    Query parameters:
    • filename (required) – path to the log file.

    • lines (optional) – number of lines to return (default 100, max 10000).

    Example:
    GET /viewLog?filename=logs/slackerdb.log&lines=50
    Response (JSON array of strings):
    [
      "2025-12-04 10:00:00 INFO  Server started",
      "2025-12-04 10:00:01 INFO  Listening on port 8080"
    ]
    /status (GET)

    Returns comprehensive server status information, including server details, database metrics, configuration parameters, usage statistics, installed extensions, and active sessions.

    Example:
    GET /status
    Response (abbreviated):
    {
      "server": {
        "status": "RUNNING",
        "version": "0.1.8",
        "build": "2025-12-04 10:00:00.000 UTC",
        "pid": 12345,
        "now": "2025-12-04 15:20:00",
        "bootTime": "2025-12-04 10:00:00",
        "runTime": "5 hours, 20 minutes, 0 seconds"
      },
      "database": {
        "version": "v1.0.0",
        "size": "1.2 GB",
        "memoryUsage": "256 MB",
        "walSize": "0 B"
      },
      "parameters": {},
      "usage": {},
      "extensions": [],
      "sessions": []
    }
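
    A small Python sketch that combines two of these endpoints to monitor a running server (the host and port are placeholders for wherever port_x is listening):

      # Sketch only: print the server state and the last few log lines.
      import requests

      base = "http://localhost:8080"

      status = requests.get(base + "/status").json()
      print(status["server"]["status"], status["server"]["runTime"])

      log_lines = requests.get(base + "/viewLog",
                               params={"filename": "logs/slackerdb.log", "lines": 20}).json()
      for line in log_lines:
          print(line)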

    Web-based SQL Console

    Console

    Note: The web console is not a production‑ready feature. It is a demonstration tool designed to help you understand and verify the backend services (Data Service, MCP Tool, and MCP Resource) in a visual, interactive way.

    SlackerDB includes a web‑based SQL console that provides an interactive interface for executing SQL queries, viewing results, and performing administrative tasks directly from your browser. In addition to the core SQL REPL, the console offers dedicated panels for managing Data Service, MCP Tool, and MCP Resource – three key extension mechanisms of the SlackerDB ecosystem.

    Access: After starting the server with the data‑service API port (port_x) enabled, open http://<server>:<port_x>/console.html in a modern browser.

    Features:

    • Interactive SQL Shell: Type SQL statements at the prompt and execute them with Enter. Command history is available with up/down arrows.

    • WebSocket Connection: The console automatically connects to the SQL REPL server via WebSocket (/sql/ws) and manages sessions.

    • Tabular Results: Query results are displayed as formatted tables with column names and types.

    • Sidebar Tools Panel: Provides quick access to:

      • Server Status – detailed modal with server metrics, database parameters, usage statistics, extensions, and active sessions.

      • Backup Database – modal to create a backup with optional download of the resulting file.

      • Log Viewer – modal to inspect server logs in real time.

    • Connection Indicator: Visual indicator shows whether the console is connected to the server.

    • Asynchronous Execution: Long‑running queries are executed asynchronously; results can be fetched in pages.

    • Data Service Management Panel: Register, list, load, save, and unregister data services. You can also download the service definitions as a JSON file (the download will prompt you to choose a save location via the browser’s file‑picker dialog).

    • MCP Tool Management Panel: List, load, save, and unregister MCP (Model Context Protocol) tools. Download the tool definitions as JSON (with file‑picker dialog).

    • MCP Resource Management Panel: List, load, save, and unregister MCP resources. Download the resource definitions as JSON (with file‑picker dialog).

    Usage:

    1. Ensure the server is running with port_x configured (e.g., port_x=8080).

    2. Navigate to http://localhost:8080/console.html.

    3. The console will attempt to connect automatically. Once connected, you can start typing SQL.

    4. Use the left‑sidebar buttons to switch between the Console, Data Service, MCP Tool, and MCP Resource views.

    5. In each management panel you can:

      • Refresh the list of registered items.

      • Select items with checkboxes and perform batch operations (unregister, download).

      • Load definitions from a JSON file.

      • Save definitions to the server’s configuration directory.

      • Download definitions as a JSON file – the browser will show a file‑save dialog, allowing you to choose the location and filename.

    6. Use the right‑sidebar buttons to open status, backup, or log modals.

    The console is built with plain HTML/JavaScript and requires no additional installation. It is intended for development, testing, and administrative purposes in trusted environments. Its primary goal is to demonstrate how the backend services work and to provide a convenient way to experiment with the Data Service, MCP Tool, and MCP Resource subsystems.

    Embed the db server in your code

      // create a configuration and update it as needed
      ServerConfiguration serverConfiguration = new ServerConfiguration();
      serverConfiguration.setPort(4309);
      serverConfiguration.setData("data1");

      // init database
      DBInstance dbInstance = new DBInstance(serverConfiguration);

      // startup database
      dbInstance.start();

      // shutdown database
      dbInstance.stop();

      // We currently support starting multiple instances at the same time,
      // but each instance must have its own port and instance name.

    Embed the db proxy in your code

        ServerConfiguration proxyConfiguration = new ServerConfiguration();
        proxyConfiguration.setPort(dbPort);
        ProxyInstance proxyInstance = new ProxyInstance(proxyConfiguration);
        proxyInstance.start();
    
        // Waiting for server ready
        while (!proxyInstance.instanceState.equalsIgnoreCase("RUNNING")) {
            Sleeper.sleep(1000);
        }

    Jdbc program example with postgres client

        // "db1" is your database name in your configuration file.
        // 3175  is your database port in your configuration file.
        // If you are connecting for the first time, there will be no other users except the default main
        String  connectURL = "jdbc:postgresql://127.0.0.1:3175/db1";
        Connection pgConn = DriverManager.getConnection(connectURL, "main", "");
        pgConn.setAutoCommit(false);
    
        // .... Now you can execute your business logic.

    Jdbc program example with slackerdb client

        // "db1" is your database name in your configuration file.
        // 3175  is your database port in your configuration file.
        // If you are connecting for the first time, there will be no other users except the default main
        String  connectURL = "jdbc:slackerdb://127.0.0.1:3175/db1";
        Connection pgConn = DriverManager.getConnection(connectURL, "main", "");
        pgConn.setAutoCommit(false);
    
        // .... Now you can execute your business logic.

    ODBC and Python programs

    ODBC and Python connections are also supported.
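
    For example, a minimal Python sketch using psycopg2 over the PostgreSQL wire protocol; "db1", port 3175 and the "main" user come from the JDBC examples above, while the table name and CSV file in the COPY step are purely illustrative and the exact COPY options may differ from your setup.

      # Sketch only: connect, run a query, and bulk-load a CSV via COPY
      # (the same wire-level COPY that the JDBC CopyManager uses).
      import psycopg2

      conn = psycopg2.connect(host="127.0.0.1", port=3175, dbname="db1",
                              user="main", password="")
      with conn:
          with conn.cursor() as cur:
              cur.execute("SELECT 1")
              print(cur.fetchone())

              # Illustrative table and file; adjust COPY options to your server.
              cur.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, name VARCHAR)")
              with open("events.csv") as f:
                  cur.copy_expert("COPY events FROM STDIN WITH (FORMAT csv)", f)
      conn.close()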

    Use IDE tools to connect to the database

    Since native Postgres clients often rely on data dictionary information that DuckDB does not have,
    we do not recommend using a plain PG client to connect to this database (it works, but with limited functionality).
    Instead, we suggest using the dedicated client provided in this project.

    Dbeaver:
    Database -> Driver Manager -> New
       Database type: generic database.
       Class name:    org.slackerdb.jdbc.Driver
       URL template:  jdbc:slackerdb://{host}:{port}/[{database}]

    Data Security

    Based on DuckDB’s secure encryption mechanism, we support data encryption.

    To use this feature, you need to:

    1. Set the parameter data_encrypt to true.

    2. Set the database key through any of the following three methods:

      1. Environment variable: SLACKERDB_<dbName>_KEY (where <dbName> is replaced with the name of your database instance).

      2. Java property: Specify it when starting the program as -DSLACKERDB_<dbName>_KEY=…​.

      3. After startup, specify it via the statement: ALTER DATABASE <dbName> SET Encrypt Key <key>.

    Known Issues

    1. User and password authorization

    User/password authentication is not supported; the two options are kept only for compatibility.
    You can fill in anything you like; the values are ignored.

    2. Limited support for duckdb datatype

    Only some DuckDB data types are supported, mainly simple types such as int, number, double, and varchar. Some complex types are still under development, and some are not supported by the PG protocol, such as blob, list, and map. You can refer to sanity01.java to see what is currently supported.

    3. postgresql-fdw

    postgresql-fdw uses DECLARE CURSOR to fetch remote data, which DuckDB does not support.

    Roadmap

  • k8s-workload-identity

    Workload Identity Example

    In this example we demonstrate how to create a Kubernetes cluster and to bind a GCP service account to a GKE service account giving your Kubernetes service account GCP level permissions.

    Introduction

    What is Workload Identity?

    Workload identity is a way to securely provide access to Google cloud services within your Kubernetes cluster.

    This is accomplished by creating a Google cloud service account, then granting the roles and/or permissions required. Now you create a Kubernetes service account and you add the annotation to the service account referencing the GCP service account which has the required roles and/or permissions to be able to access the Google cloud services within your cluster.

    Example Architecture

    The following diagrams will demonstrate workload identity and how it binds a GCP service account with Pub/Sub permissions to a GKE service account allowing the pod running your application to have the correct GCP permissions outside your Kubernetes cluster to access Pub/Sub correctly.

    Setup

    In this repository there are several key things that must be mentioned.

    Firstly, we've created a Google Cloud service account for our Terraform user which has access to create all the required resources within our Google project. This will need to be done before being able to use this repository. To set this account up, I'd highly recommend following the official Google documentation.

    Assuming you have your terraform service account setup correctly, you should now have the ability to run the terraform init command and it will create the relevant state file in your remote state location – ideally stored in Google cloud storage.

    Note: Setting up the remote state should have been done as part of the Google documentation.

    The following are some basic instructions to be able to create the terraform resources we’ve created in this project.

    1. Update the _backend.tf and _providers.tf files and ensure the credentials location matches where you have stored your terraform service account json file.
    2. Apply the terraform IaC to your Google cloud project using the following command:
    $ terraform apply -auto-approve

    Note: You may wish to first see what resources will be created in your project before applying this IaC, this can be done using the following command:

    $ terraform plan

    Congratulations! You have successfully started a Kubernetes cluster with workload identity working.

    Test

    This will confirm that our GCP and GKE service accounts have been bound correctly and our GKE service account can indeed use IAM permissions bound to the GCP service account.

    Firstly, you will need to authenticate your kubectl command; this can be done using the following commands:

    $ gcloud auth login
    $ gcloud container clusters get-credentials default-cluster

    Run the following command in your terminal:

    $ kubectl run -it \
      --generator=run-pod/v1 \
      --image google/cloud-sdk \
      --serviceaccount gke-sa \
      --namespace workload-identity \
      workload-identity-test

    This will deploy a pod with an image that contains the Google Cloud SDK, using the GKE service account to deploy the pod. Once the pod is healthy, it becomes an interactive shell from which we can begin testing permissions inside the container running in the pod.

    Once the shell is available you should be able to run the following command:

    $ gcloud auth list

    Running gcloud auth list will result in displaying the email addresses that can be used to auth. If this is successful you should see something like the following output in your terminal:

    ACTIVE  ACCOUNT
    *       gcp-sa@gke-terraform-cluster-demo.iam.gserviceaccount.com


  • MixDQ

    MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization


    News

    This repo contains the official code of our ECCV2024 paper: MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization

    We design MixDQ, a mixed-precision quantization framework that successfully tackles the challenging few-step text-to-image diffusion model quantization. With negligible visual quality degradation and content change, MixDQ could achieve W4A8, with equivalent 3.4x memory compression and 1.5x latency speedup.

    • 🤗 Open-Source Huggingface Pipeline 🤗: We implement efficient INT8 GPU kernels to achieve actual GPU acceleration (1.45x) and memory savings (2x) for W8A8. The pipeline is released at: https://huggingface.co/nics-efc/MixDQ. It can be used with just a few lines of code.

    • Huggingface Open-source CUDA Kernels: We provide open-sourced CUDA kernels for practical hardware savings in ./kernels; for more details on the CUDA development, please refer to ./kernels/README.md

      • Memory Savings

        Memory Cost (MB) Static (Weight) Dynamic (Act) Peak Memory
        FP16 version 4998 240.88 5239
        Quantized version 2575 55.77 2631
        Savings 1.94x 4.36x 1.99x
      • Latency Speedup

        UNet Latency (ms) RTX3090 RTX4080 A100
        FP16 version 43.6 36.1 30.7
        Quantized version 34.2 24.9 28.8
        Speedup 1.27x 1.45x 1.07x
    • For more information, please refer to our Project Page: https://a-suozhang.xyz/mixdq.github.io/

    Usage

    EnvSetup

    We recommend using conda for environment management.

    cd quant_utils
    conda env create -f environment.yml
    conda activate mixdq
    python -m pip install --upgrade --user ortools
    pip install accelerate
    

    Data Preparation

    The Stable Diffusion checkpoints are automatically downloaded with the diffusers pipeline; we also provide manual download scripts in ./scripts/utils/download_huggingface_model.py. For text-to-image generation on COCO annotations, we provide captions_val2014.json via Google Drive; please put it in ./scripts/utils.

    0. FP text-to-image generation

    Run main_fp_infer.sh to generate images based on COCO annotations or a given prompt. (If --prompt xxx is omitted, COCO annotations are used as the default prompts.) The images can be found in $BASE_PATH/generated_images.

    ## SDXL_Turbo FP Inference
    config_name='sdxl_turbo.yaml' # quant config, but only model names are used
    BASE_PATH='./logs/debug_fp'   # save image path
    
    CUDA_VISIBLE_DEVICES=$1 python scripts/txt2img.py \
    		--config ./configs/stable-diffusion/$config_name \
    		--base_path $BASE_PATH --batch_size 2 --num_imgs 8  --prompt  "a vanilla and chocolate mixing icecream cone, ice background" \
    		--fp16
    

    1. Normal Quantization

    We provide the shell script main.sh for the whole quantization process. The quantization process consists of 3 steps: (1) generating the calibration data, (2) conducting the PTQ process, and (3) running quantized model inference. We also provide scripts for each of the 3 steps (main_calib_data.sh, main_ptq.sh, main_quant_infer.sh). You can run main.sh to complete the whole quantization process, or run the three steps separately.

    1.1 Generate Calibration Data

    Run main_calib_data.sh $GPU_ID to generate the FP activation calibration data. The output path of the calibration data is specified in the quant config.yaml; the --save_image_path saves the FP-generated reference images. (We provide pre-generated calibration data on Google Drive; you can use it to replace "/share/public/diffusion_quant/calib_dataset/bs1024_t1_sdxl.pt" in mixdq_open_source/MixDQ/configs/stable-diffusion/sdxl_turbo.yaml. Note that the calibration data on Google Drive contains 1024 samples, so you may increase n_samples in sdxl_turbo.yaml up to 1024.)

    CUDA_VISIBLE_DEVICES=$1 python scripts/gen_calib_data.py --config ./configs/stable-diffusion/$config_name --save_image_path ./debug_imgs
    

    1.2 Post Training Quantization (PTQ) Process

    Run main_ptq.sh $LOG_NAME $GPU_ID to conduct PTQ and determine the quantization parameters; they are saved as ckpt.pth in the log path. (We provide the ckpt.pth quant_params checkpoint for sdxl_turbo on Google Drive; you may put it under the ./logs/$log_name folder. It contains the quant_params for 2/4/8 bit, so you can use it with different mixed-precision configurations.)

    CUDA_VISIBLE_DEVICES=$2 python scripts/ptq.py --config ./configs/stable-diffusion/${cfg_name} --outdir ./logs/$1 --seed 42
    

    1.3 Inference Quantized Model

    1.3.1 Normal

    We provide the quantized inference example in the latter part of main.sh and in main_quant_infer.sh (the commented part). The --num_imgs flag denotes how many images to generate. When no --prompt is given, the COCO annotations are used as the default prompts and, by default, 1 image is generated per prompt. When a user-defined prompt is given, num_imgs images are generated for that prompt.

    CUDA_VISIBLE_DEVICES=$1 python scripts/quant_txt2img.py --base_path $CKPT_PATH --batch_size 2 --num_imgs 8 
    

    1.3.2 Mixed Precision

    For simplicity, we provide the MixDQ-acquired mixed-precision configurations in ./mixed_precision_scripts/mixed_percision_config/sdxl_turbo/final_config/; an example of mixed-precision inference is shown in main_quant_infer.sh. The "act protect" file lists layers that are preserved as FP16. (It is also worth noting that mixed_precision_scripts/quant_inference_mp.py is used for the mixed-precision search; to run inference with the mixed-precision quantized model, use scripts/quant_txt2img.py.)

    # Mixed Precision Quant Inference
    WEIGHT_MP_CFG="./mixed_precision_scripts/mixed_percision_config/sdxl_turbo/final_config/weight/weight_8.00.yaml"  # [weight_5.02.yaml, weight_8.00.yaml]
    ACT_MP_CFG="./mixed_precision_scripts/mixed_percision_config/sdxl_turbo/final_config/act/act_7.77.yaml "
    ACT_PROTECT="./mixed_precision_scripts/mixed_percision_config/sdxl_turbo/final_config/act/act_sensitivie_a8_1%.pt"
    
    CUDA_VISIBLE_DEVICES=$1 python scripts/quant_txt2img.py \
      --base_path $CKPT_PATH --batch_size 2 --num_imgs 8  --prompt "a vanilla and chocolate mixing icecream cone, ice background" \
      --config_weight_mp $WEIGHT_MP_CFG \
      --config_act_mp  $ACT_MP_CFG \
      --act_protect  $ACT_PROTECT \
      --fp16
    

    Using the example prompt, the generated W8A8 mixed-precision image should look like the sample shown in the original repository.

    3. Mixed Precision Search

    Please download the util_files from Google Drive and unzip them in the repository root directory. Please refer to ./mixed_precision_scripts/mixed_precision_search.md for the detailed mixed-precision search process.

    Acknowledgements

    Our code is developed based on Q-Diffusion and the Diffusers library.

    Citation

    If you find our work helpful, please consider citing:

    @misc{zhao2024mixdq,
          title={MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization}, 
          author={Tianchen Zhao and Xuefei Ning and Tongcheng Fang and Enshu Liu and Guyue Huang and Zinan Lin and Shengen Yan and Guohao Dai and Yu Wang},
          year={2024},
          eprint={2405.17873},
          archivePrefix={arXiv},
          primaryClass={cs.CV}
        }
    

    TODOs

    • the evaluation scripts (FID, ClipScore, ImageReward)
    • the efficient INT8 GPU kernels implementation

    Contact

    If you have any questions, feel free to contact:

    Tianchen Zhao: suozhang1998@gmail.com

    Xuefei Ning: foxdoraame@gmail.com

  • zof

    zof: OpenFlow Micro-Framework

    MIT licensed Build Status codecov.io

    zof is a Python framework for creating asyncio-based applications that control the network using the OpenFlow protocol.

    Supported Features

    • OpenFlow versions 1.0 – 1.4 (with partial support for 1.5)
    • TLS connections
    • Limited packet parsing and generation: ARP, LLDP, IPv4, IPv6, UDP, TCP, ICMPv4, ICMPv6
    • Apps can simulate switches; supports both sides of the OpenFlow protocol

    Requirements

    • Python 3.5.1 or later
    • oftr command line tool

    Install – Linux

    # Install /usr/bin/oftr dependency.
    sudo add-apt-repository ppa:byllyfish/oftr
    sudo apt-get update
    sudo apt-get install oftr
    
    # Create virtual environment and install latest zof.
    python3.5 -m venv myenv
    source myenv/bin/activate
    pip install zof

    The oftr command line tool can also be installed on a Raspberry PI or using HomeBrew.

    Demos

    To run the layer2 controller demo:

    python -m zof.demo.layer2
    

    Architecture

    zof uses a separate oftr process to terminate OpenFlow connections and translate OpenFlow messages to JSON.

    Architecture: The oftr process translates OpenFlow to JSON.

    You construct OpenFlow messages via YAML strings or Python dictionaries. Incoming OpenFlow messages are generic Python objects. Special OpenFlow constants such as ‘NO_BUFFER’ appear as strings.

    type: FLOW_MOD
    msg:
      command: ADD
      match:
        - field: IN_PORT
          value: 1
        - field: ETH_DST
          value: 00:00:00:00:00:01
      instructions:
        - instruction: APPLY_ACTIONS
          actions:
            - action: OUTPUT
              port_no: 2
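
    The same FLOW_MOD expressed as a plain Python dictionary (a sketch; the keys simply mirror the YAML above):

    flow_mod = {
        'type': 'FLOW_MOD',
        'msg': {
            'command': 'ADD',
            'match': [
                {'field': 'IN_PORT', 'value': 1},
                {'field': 'ETH_DST', 'value': '00:00:00:00:00:01'},
            ],
            'instructions': [
                {'instruction': 'APPLY_ACTIONS',
                 'actions': [{'action': 'OUTPUT', 'port_no': 2}]},
            ],
        },
    }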

    The basic building block of zof is an app. An app is associated with various message and event handlers. You create an app object using the zof.Application class. Then, you associate handlers using the app’s message decorator.

    import zof
    
    APP = zof.Application('app_name_here')
    
    @APP.message('packet_in')
    def packet_in(event):
        APP.logger.info('packet_in message %r', event)
    
    @APP.message(any)
    def other(event):
        APP.logger.info('other message %r', event)
    
    if __name__ == '__main__':
        zof.run()

    Place the above text in a file named demo.py and run it with python demo.py. This app handles OpenFlow ‘PACKET_IN’ messages using the packet_in function. All other messages are dispatched to the other function. The app does not do anything; it just logs events.

    To compose the demo.py program with the layer2 demo:

    python demo.py --x-modules=zof.demo.layer2
    
  • data-scores-in-the-uk

    Data Scores in the UK

    Synopsis

    License: GPL v3

    As part of our project ‘Data Scores as
    Governance’ we have developed a tool to map and investigate the uses of data
    analytics and algorithms in public services in the UK. Little is still known
    about the implementation of data-driven systems and algorithmic processes in
    public services and how citizens are increasingly ‘scored’ based on the
    collection and combination of data.

    This repository handles all aspects of the data collection and analysis.

    Contents

    Installation

    Requirements

    All development happened on Linux and the code should work well on MacOS as
    well. Windows support was never tested.

    Installation

    See installation instructions for boot and the download section of nodejs to install the required tools. With NodeJS installed type the following to install all dependencies:

    npm install
    

    Data Processes

    The Sugarcube tool set is used for all data acquisition processes. Those processes can be repeated at any time.

    bin/import-ddg-scrapes.sh

    The initial data scrape from DuckDuckGo was done outside of this repository, but still using Sugarcube. This script extracts the contents of the search results from the initial data set and imports them into the database. The imported data can be found in materials/ddg-scrapes-clean.csv.

    bin/search-ddg-gov.sh

    Scrape DuckDuckGo for search results on government websites (site:.gov.uk) based on the initial set of search queries.

    bin/search-ddg-auxiliary.sh

    Scrape DuckDuckGo for search results for auxiliary websites based on the initial set of search queries. The list of auxiliary domains is maintained in queries/aux-sites.txt.

    bin/import-foi-requests.sh

    Import the contents of the FOI requests into the database. The requests themselves are found in materials/foi-requests.

    bin/search-media-ddg.sh

    Scrape DuckDuckGo for articles from media websites. This scripts works slightly different due to the amount of possible scrapes. Those scrape need to run on multiple servers in parallel to reduce the time it takes to scrape.

    • Use the ./scripts/british_newspapers.clj to create the list of media domains.

    • Split the domains into chunks. On every server run (adjust -n r/4 to the number of servers):

      split -n r/4 --additional-suffix=.txt british-papers-domains.txt papers-

    • Start the scrape on each server:

      ./bin/search-media-ddg.sh queries/papers-aa.txt

    Scripts

    Post-processing of the data is done using the following collection of scripts. They are idempotent and can be rerun at any time.

    scripts/update_companies.clj

    Tag all documents in the database mentioning any company defined in queries/companies.txt.

    scripts/update_systems.clj

    Tag all documents in the database mentioning any system defined in queries/systems.txt.

    scripts/update_authorities.clj

    Tag all documents in the database mentioning any authority name in combination with any company or system. The lists of data are defined in queries/authorities.txt, queries/companies.txt and queries/systems.txt. This script also matches authority locations that are managed in queries/coordinates.json. If any location is missing, the script will halt. Add to the list of known coordinates to continue.

    scripts/update_departments.clj

    Tag all documents in the database mentioning any department name in combination with any company or system. The lists of data are defined in queries/departments.txt, queries/companies.txt and queries/systems.txt.

    scripts/update_blacklisted.clj

    Flag a set of documents as blacklisted. They will be excluded from any further analysis and from the data-scores-map application. The list of blacklisted IDs is collected in queries/blacklist.txt.

    scripts/generate-media-sites-queries.js

    scripts/stats_for_companies.clj

    Generate statistics about the occurrences of companies in the existing data set. It will print a sorted CSV data set to the screen.

    scripts/stats_for_systems.clj

    Generate statistics about the occurrences of systems in the existing data set. It will print a sorted CSV data set to the screen.

    scripts/stats_for_departments.clj

    Generate statistics about the occurrences of departments in the existing data set. It will print a sorted CSV data set to the screen.

    scripts/stats_for_authorities.clj

    Generate statistics about the occurrences of authorities in the existing data set. It will print a sorted CSV data set to the screen.

    scripts/clean_gov_uk_domain_names.clj

    scripts/british_newspapers.clj

    This script scrapes a list of all news media from https://www.britishpapers.co.uk/. The resulting newspaper domains are printed to the screen. Use the script like this:

    ./scripts/british_newspapers.clj | tee ./queries/british-papers-domains.txt
    

    scripts/reindex_data.clj

    This script is a helper to create a new local index and reindex an existing data set. This was helpful during development to be able to experiment on a data set. Run the script like this:

    ./scripts/reindex_data.clj http://localhost:9200/data-scores-04 http://localhost:9200/data-scores-05
    

    Bits and Pieces

    I used LibreOffice to remove line breaks in the original rawresults.csv and exported the file again as materials/ddg-scrapes.csv. Then I did more cleaning using the following command:

    cat ddg-scrapes.csv | grep -v "^NO" | grep -v "Noresults" | cut -d, -f2- | sed -En "s/_(.*)-(.*)_100taps.json/\1 \2/p" | (echo "search_category,title,description,href" && cat) > ddg-scrapes-clean.csv
    


  • mandelbrot

    Mandelbrot by Jort, Mandelbrot set viewer written in Rust

    __  __                 _      _ _               _   
    |  \/  |               | |    | | |             | |  
    | \  / | __ _ _ __   __| | ___| | |__  _ __ ___ | |_ 
    | |\/| |/ _` | '_ \ / _` |/ _ \ | '_ \| '__/ _ \| __|
    | |  | | (_| | | | | (_| |  __/ | |_) | | | (_) | |_ 
    |_|  |_|\__,_|_| |_|\__,_|\___|_|_.__/|_|  \___/ \__|
       __             __         __ 
      / /  __ __  __ / /__  ____/ /_
     / _ \/ // / / // / _ \/ __/ __/
    /_.__/\_, /  \___/\___/_/  \__/ 
         /___/                      v1.4
    


    Running

    On Linux:

    1. Install Rust using rustup:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    

    2. Run Mandelbrot by Jort using cargo run in the root of the repository

    cargo run --release
    

    Usage

        
    __  __                 _      _ _               _   
    |  \/  |               | |    | | |             | |  
    | \  / | __ _ _ __   __| | ___| | |__  _ __ ___ | |_ 
    | |\/| |/ _` | '_ \ / _` |/ _ \ | '_ \| '__/ _ \| __|
    | |  | | (_| | | | | (_| |  __/ | |_) | | | (_) | |_ 
    |_|  |_|\__,_|_| |_|\__,_|\___|_|_.__/|_|  \___/ \__|
       __             __         __ 
      / /  __ __  __ / /__  ____/ /_
     / _ \/ // / / // / _ \/ __/ __/
    /_.__/\_, /  \___/\___/_/  \__/ 
         /___/                      v1.4
    
    
    Run Mandelbrot using:
    	cargo run --release -- <width> <height> <max_iterations> <supersampling_amount> <window_scale>
    where <arg> means substitute with the value of arg
    use '-' to use the default value of arg
    
    KeyBindings {
        Up -> Move up translation_amount pixels,
        Down -> Move down translation_amount pixels,
        Left -> Move left translation_amount pixels,
        Right -> Move right translation_amount pixels,
        R -> Reset the Mandelbrot set view to the starting view,
        NumPadPlus -> Increment translation_amount,
        NumPadMinus -> Decrement translation amount,
        NumPadAsterisk -> Increment scale_numerator,
        NumPadSlash -> Decrement scale_numerator,
        LeftBracket -> Scale the view by scaling_factor, effectively zooming in,
        RightBracket -> Scale the view by inverse_scaling_factor, effectively zooming out,
        V -> Prints the current Mandelbrot set view; the center and scale,
        Key1 -> Renders VIEW_1,
        Key2 -> Renders VIEW_2,
        Key3 -> Renders VIEW_3,
        Key4 -> Renders VIEW_4,
        Key5 -> Renders VIEW_5,
        Key6 -> Renders VIEW_6,
        Key7 -> Renders VIEW_7,
        Key8 -> Renders VIEW_8,
        Key9 -> Renders VIEW_9,
        Key0 -> Renders VIEW_0,
        K -> Prints the keybindings,
        S -> Saves the current Mandelbrot set view as an image in the saved folder,
        I -> Manually input a Mandelbrot set view,
        A -> Pick an algorithm to color the Mandelbrot set view,
        M -> Change the Mandelbrot set view max_iterations,
        O -> Change the Mandelbrot set view color channel mapping, xyz -> RGB, where x,y,z ∈ {{'R','G','B'}} (case-insensitive),
        Q -> Change the window and image quality of the Mandelbrot set rendering by setting the SSAA multiplier, clamped from 1x to 64x,
        X -> Change the image quality of the Mandelbrot set rendering by setting the SSAA multiplier, clamped from 1x to 64x,
        C -> Prints the configuration variables,
    }
    
    

    Controls

    Keys Action
    Up, Down, Left, Right Move up, down, left, or right
    R Reset the Mandelbrot set view to the starting view
    [ Zoom in
    ] Zoom out
    V Prints the current Mandelbrot set view; the center and scale
    0, …, 9 Render a preconfigured view
    K Print the keybindings
    S Saves the current Mandelbrot set view as an image
    I Manually input a Mandelbrot set view
    A Pick an algorithm to color the Mandelbrot set view
    M Change the Mandelbrot set view max_iterations
    O Change the Mandelbrot set view color channel mapping
    Q Change the window and image quality of the Mandelbrot set rendering by setting the SSAA multiplier
    X Change the image quality of the Mandelbrot set rendering by setting the SSAA multiplier
    ESC, CTRL+C Exit

    Wallpapers

    3440×1440

    2023-06-26_12-48-59 189737_UTC 2023-06-26_12-35-03 632016400_UTC 2023-06-26_15-36-32 408135800_UTC 2023-06-26_15-25-34 763976800_UTC 2023-06-26_12-56-26 498671100_UTC 2023-06-26_08-54-45 849997300_UTC

    2560×1600

    2023-06-24 18:50:23 842186828 UTC 2023-06-29_11-45-50 778868321_UTC 2023-06-29_11-46-06 821888383_UTC 2023-06-29_11-48-21 628194507_UTC 2023-07-01_15-51-57 214248342_UTC

    1170×2532 (Iphone 13)

    2023-06-24 19:29:53 436765840 UTC

    Benchmarks

  • vfdctl

    Vfdctl Use Case

    Vfdctl was built so that when programmatic interaction is needed with a VFD, you do not need to be aware of the specific I/O, registers or even protocol of your device in order to read or write to it. Instead, a user wanting to update a setting needs to send a command with the knowledge of the device name, the value requested and a parameter name that is consistent across every single VFD device. A user wanting the runtime telemetry just needs to specify topics they care to receive updates for in an MQTT client.
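
    To make this concrete, here is a purely hypothetical Python sketch using paho-mqtt. The topic layout, payload shape, device name, and broker address are all assumptions for illustration and are not part of the Vfdctl API.

      # Hypothetical sketch: write one VFD parameter and watch its telemetry.
      # Topic names and payload format are invented for illustration only.
      import json

      import paho.mqtt.client as mqtt

      client = mqtt.Client()                # paho-mqtt 1.x style constructor
      client.connect("192.168.1.10", 1883)  # IP address of your MQTT broker

      # Write a setting: device name + generic parameter name + requested value.
      client.publish("vfdctl/pump1/set",
                     json.dumps({"parameter": "frequency_setpoint", "value": 45.0}))

      # Read runtime telemetry: subscribe only to the topics you care about.
      client.subscribe("vfdctl/pump1/telemetry/#")
      client.on_message = lambda c, u, msg: print(msg.topic, msg.payload.decode())
      client.loop_forever()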

    This decoupling brings numerous tangible benefits. While running Vfdctl, replacing a VFD with a different brand or switching it to a different protocol does not force your controllers, HMI, or web app to be updated as well. If the VFD is not working after you’ve replaced it, you can safely assume the issue occurred during the setup of the new VFD, because you did not have to re-download code to all of your devices and risk unintended changes. During system downtime, minimizing complexity is key to ensuring the quickest recovery.

    Another advantage is that you are not tied to a single brand, make, or model of VFD. For consistency, most users will opt for the same VFD when using several in their system, but what if that device isn’t available for another 20 weeks during a new project? What if this is a 20-year-old system and the device is end-of-life, with no drop-in replacement? Imagine only needing to change a single config file for the Vfdctl app with the brand and desired protocol of the new hardware, without downloading to every 20-year-old device on the network! This is the role that Vfdctl serves.

    This page describes what’s possible to build with the different tools and APIs that are available on the platform, and how to get the access and information that you need to get started.

    How do I get started with Vfdctl?

    Hardware requirements

    Across the platform, there are different controller devices capable of various protocols. This section describes what you need to successfully use our APIs.
    In addition to the controllers below, you will need tools and cables to communicate with them, such as Ethernet cables, micro USB cables, power cables, and a micro SD card (1 GB or greater).

    Supported Controllers

    | Make | Model | MQTT Support | Modbus Support |
    | --- | --- | --- | --- |
    | Automation Direct | P1AM-100ETH w/ MKR485 | 3.1.1 | RTU |

    API Documentation

    APIs

    | API | Vfdctl Version | MQTT Version Supported | Requires Cloud Connectivity? | R/W |
    | --- | --- | --- | --- | --- |
    | Runtime telemetry | v0.1.0beta | 3.1.1 | No | Read Only |
    | Settings Configuration | v0.1.0beta | 3.1.1 | No | Write |
    | Last Will & Testament | v0.3.0 | 3.1.1 | No | Read Only |

    Get started with the Vfdctl APIs

    Before the Vfdctl endpoints can be monitored, a few setup steps are required.

    Setup the connected devices

    1. Prepare the configuration file
       - Download a copy of the config.txt from the application’s repo on GitHub to use as a template
       - Place the config.txt at the root of a micro SD card
    2. Set up an MQTT Broker (via Docker)
       - Install Docker
       - Ensure the Docker daemon is running via the Docker Desktop app
       - Open a terminal / command prompt
       - Run the command `docker run -it --name mqtt --restart unless-stopped -p 1883:1883 eclipse-mosquitto mosquitto -c /mosquitto-no-auth.conf`
       - Find the IP address of your device; this is the IP address of your MQTT Broker
       - Within config.txt, update the broker_url to the IP address of your MQTT Broker

    Network the devices

    1. Using a managed switch or router, connect the controller via ethernet
    2. On the same network, connect your machine running the MQTT broker
    3. Assign a static IP address to the controller on the same subnet as your MQTT Broker

    Set up the Vfdctl controller device

    P1AM-100ETH

    1. Load the image onto the P1AM-100ETH
       - Power up the P1AM-100ETH controller via 24V
       - Clone the git repo to a machine running the Arduino IDE
       - Connect to the P1AM-100ETH controller via micro USB on the machine
       - Open the app.ino project in the Arduino IDE
       - Select your connected device and download the app to the controller
    2. Load the configuration onto the controller
       - Within config.txt, update the device_mac to your controller’s Ethernet MAC address (this can usually be found on a sticker on the controller or its packaging)
       - Ensure config.txt is located at the root of the micro SD card
       - Insert the micro SD card into the controller
       - Power cycle the controller

    Monitoring Telemetry and Sending Commands

    Vfdctl’s public communications use MQTT to communicate flexibly with an IoT ingestion engine. Any MQTT 3.1.1-capable client on the network is able to speak to the Vfdctl app. For ease of use, I recommend installing MQTT Explorer to browse topics, publish commands, and set up basic data traces. For those unfamiliar with MQTT, reference material is shared in the “Important Resources” section.

    Monitoring device telemetry

    Telemetry is reported as a read-only stream on a topic beginning with dt (device telemetry). It is up to the topic’s consumer to determine the most appropriate way to handle subscribing to topics.
    Telemetry topics follow the pattern below:
    dt/[app]/[device]/[parameter-measured]
    As an example, the topic below is expected to report the values of the current drawn by VFD 1:

    dt/vfdctl/vfd1/amps

    Some possible examples of determining your subscription are:

    - dt/vfdctl/vfd1/+
      You are creating a per-device engineering dashboard page for a web app; you only care about the currently selected “vfd1” device and want to display all of its messages in a historian table view.
    - dt/vfdctl/vfd1/amps
      You are now on revision 2 of your engineering dashboard page and have added React components with a line chart where you want to display the current draw specifically.
    - dt/vfdctl/+/amps
      You want to use AI to predict your billing costs, so you create a trend of mean current draw across all devices on the network: a Node-RED listener resets every minute, sums each incoming value, divides by the number of samples received at the end of the interval, publishes the mean on a new topic called dt/facility1/vfd/amps, and inserts the data into a database (a code sketch of this listener follows below). Alternatively, a conveyor you are driving can bind if it gets too worn, so you implement a Node-RED listener that sends a text message to a technician when the value passes an alarm setpoint, warning that VFD1’s conveyor needs maintenance before irreversible damage occurs.
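    A minimal sketch of that mean-current listener in Python is shown below. It is not part of Vfdctl; it assumes the paho-mqtt 2.x client, a broker at a placeholder IP address, and that each amps payload is a bare numeric reading, so adjust the parsing to the actual payload format.

    ```python
    # Hypothetical listener: average dt/vfdctl/+/amps over one-minute windows and
    # republish the mean. The broker address, aggregate topic, and payload format
    # are assumptions, not part of the Vfdctl spec.
    import json
    import time

    import paho.mqtt.client as mqtt

    BROKER = "192.168.1.10"   # placeholder: your MQTT broker's IP
    WINDOW_SECONDS = 60

    total = 0.0
    count = 0
    window_start = time.monotonic()

    def on_message(client, userdata, message):
        global total, count, window_start
        total += float(message.payload.decode())   # assumes a bare numeric payload
        count += 1
        if time.monotonic() - window_start >= WINDOW_SECONDS:
            client.publish("dt/facility1/vfd/amps",
                           json.dumps({"value": total / count}))
            total, count, window_start = 0.0, 0, time.monotonic()

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe("dt/vfdctl/+/amps")            # '+' matches every device
    client.loop_forever()
    ```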

    Publishing commands

    Commands are requests for action to occur on another client.
    Command topics follow the pattern below:

    cmd/[app]/[device]/[parameter-name]/[request-type]

    As an example, the topic below contains a request to configure the acceleration time for VFD1:

    cmd/vfdctl/vfd1/acceltime/config

    The message body should contain a JSON object with a “value” key and an integer value:

    {"value":40}
    

    Publishing the sample object to the sample topic is expected to set the acceleration time of VFD1 to 40 seconds.
    Example of reading telemetry and publishing commands
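    As a concrete illustration (not part of the Vfdctl codebase), the same command can be published from a short Python script, assuming the paho-mqtt 2.x client and a broker at a placeholder IP address:

    ```python
    # Hypothetical command publisher; the broker IP is a placeholder.
    import json

    import paho.mqtt.client as mqtt

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect("192.168.1.10", 1883)   # placeholder: your MQTT broker's IP
    client.loop_start()

    # Request that VFD1's acceleration time be set to 40 seconds
    info = client.publish("cmd/vfdctl/vfd1/acceltime/config",
                          json.dumps({"value": 40}), qos=1)
    info.wait_for_publish()

    client.loop_stop()
    client.disconnect()
    ```

    MQTT Explorer can do the same thing interactively by publishing the JSON body to that topic.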

    Configuring Parameters

    Controller Configuration

    The controller’s config file contains settings for the controller hardware connection, the VFD hardware, and the MQTT topic mapping. The file is plain text, so any text editor or command-line tool can edit it easily.
    After making edits, see “Set up the Vfdctl controller device” for load instructions.
    Example of the config.txt, viewed in VS Code.

    Important Resources

    Platform updates

    Please make sure to bookmark the following pages, as they will provide you with important details on API outages, updates, and other news relevant to developers on the platform.
    - Check out our releases to find the latest release or a history of all released binaries
    - Visit the Tulsa Software Repositories to find related open-source codebases
    - In case of any document conflicts, assume the Vfdctl readme is most accurate

    MQTT reference material

    MQTT is extremely flexible and not as rigidly specified as RPC or REST communications. This leaves lots of room for one-off implementations if systems are not intentionally designed. For our communications architecture, we have based our topic and payload structures on AWS best practices to maximize usability.

    • Even if not using AWS IoT cloud communications, their MQTT communication patterns are widely applicable
    • A good client GUI like MQTT Explorer makes troubleshooting your system much easier, but any app supporting MQTT 3.1.1 communications will suffice
    • Using an IoT rules engine such as Node-RED, AWS IoT Core, etc. is essential to utilizing data with minimal coupling and maximal scalability

    Still have questions?

    For questions not covered here, or to learn what we’re currently working on, visit the Tulsa Software Slack channel to review topics, ask questions, and learn from others.

    Visit original content creator repository

  • Linksy

    Linksy – Open Source Bookmark/Link Manager

    Linksy is an open-source bookmark and link manager that enables users to organize and manage their social media posts and other links. It allows for the creation of folders to store links, the ability to share collections with others, and a search feature to quickly find links. Thanks to its seamless integration with various social media platforms, Linksy provides an easy and efficient way to manage links.

    Features

    • Create folders and organize links from different social media platforms
    • Search through your links with ease
    • Share your link collections with others
    • Embed social media posts from platforms like Twitter, Instagram, and more
    • Easy-to-use UI with animations for an enhanced user experience

    Tech Stack

    This project uses the following technologies:

    • Next.js: Framework for both frontend and backend
    • TypeScript: Programming language
    • Better Auth: Authentication library
    • Prisma: ORM for database interactions
    • PostgreSQL: Database management system
    • ShadCN: UI component library
    • Framer Motion: Animation library for smooth transitions
    • React Hot Toast: Toast notifications library
    • Zod: Schema and validation library
    • Zustand: State management library
    • TanStack Query: Data fetching and caching library
    • Axios: HTTP client for making requests
    • Tailwind CSS: Utility-first CSS framework
    • LogLib: Analytics tool for monitoring
    • Lucide React: Icon collection
    • React Icons: Icon collection
    • React Social Media Embed: Embedding library for social media posts
    • Use-Debounce: Debouncing library for improved performance

    Getting Started

    To get started with the project locally, follow these steps:

    Prerequisites

    • Node.js
    • PostgreSQL database
    • GitHub credentials for OAuth authentication

    Setup

    1. Fork the repository to your GitHub account.
    2. Clone the repository

      git clone https://github.com/your-username/linksy.git
      cd linksy

    Install dependencies

    npm install

    • Set up environment variables: Create a .env file at the root of the project and add the following variables:

    DATABASE_URL=your_postgresql_database_url

    GITHUB_CLIENT_ID=your_github_oauth_client_id

    GITHUB_CLIENT_SECRET=your_github_oauth_client_secret

    BETTER_AUTH_SECRET=your_better_auth_secret

    BETTER_AUTH_URL=your_better_auth_url

    Configure authentication client

    - Navigate to /lib/auth-client.ts
    - Change the baseURL to point to your local server (e.g., http://localhost:3000/).
    

    Run the development server

    `npm run dev`
    
    - The application will be available at http://localhost:3000.
    

    How to Contribute

    - Fork the repository to your GitHub account.
    - Clone your fork locally:
    

    git clone https://github.com/your-username/linksy.git

    Create a new branch for your feature or bug fix

    git checkout -b feature/your-feature-name

    Make your changes and commit them

    git add .

    git commit -m "Add your commit message"

    Push your changes to your fork

    `git push origin feature/your-feature-name`
    

    Create a pull request on GitHub from your branch to the main repository.

    Visit original content creator repository

  • simple-RESTful-API-Django

    simple-RESTful-API-Django

    A simple, hands-on experience with RESTful APIs in Django.

    If you want hands-on experience with RESTful APIs in Django, this is a full guide for you. First, you should know that I did this project by following a YouTube video at this link. You can also follow the steps here and use the code in the repository.

    Start from here

    1. Download Python from www.python.org
    2. Download the Django web framework from www.djangoproject.com, or install it using pip (Python’s built-in package installer).
    3. Create the virtual environment using Terminal: $ python3 -m venv [name]
    4. Go into the virtual environment using Terminal: $ source [name]/bin/activate
    5. Install Django using Terminal: $ pip install django
    6. Create a project using Terminal: $ django-admin startproject [name]
    7. Go into the folder using Terminal: $ cd [project-name]
    8. Run Django using Terminal: $ python manage.py runserver
    9. Create an app in the project folder using Terminal: $ django-admin startapp [name]
    10. Go to python_api/python_api/settings.py and add the app name to INSTALLED_APPS (e.g. ‘myapp’).
    11. In python_api/myapp/models.py add our model; you can copy it from this repo.
    12. Create the migrations that will turn the model you made into SQL tables using Terminal: $ python manage.py makemigrations
    13. Do the same for the app using Terminal: $ python manage.py makemigrations myapp
    14. Apply the migrations by running this command in Terminal: $ python manage.py migrate
    15. In python_api/python_api/urls.py add our API URLs; you can copy them from this repo.
    16. In python_api/myapp/views.py add our API logic and code; you can copy it from this repo (a hypothetical minimal sketch of these three files also appears after this list).
    17. Now you can test your APIs; just don’t forget to run this command before testing: $ python manage.py runserver
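    If you would rather sketch the three files than copy them, a hypothetical minimal version could look like the block below. The model name Item, its fields, and the /api/items/ route are illustrative only; the actual code in this repository is the authoritative reference.

    ```python
    # Hypothetical minimal sketch of the three files -- not the code from this repo.

    # --- python_api/myapp/models.py ---
    from django.db import models

    class Item(models.Model):                      # illustrative model name
        name = models.CharField(max_length=100)
        description = models.TextField(blank=True)

    # --- python_api/myapp/views.py ---
    from django.http import JsonResponse
    from .models import Item

    def item_list(request):
        # Respond to GET /api/items/ with every Item serialized as JSON
        items = list(Item.objects.values("id", "name", "description"))
        return JsonResponse({"items": items})

    # --- python_api/python_api/urls.py ---
    from django.urls import path
    from myapp import views

    urlpatterns = [
        path("api/items/", views.item_list, name="item-list"),
    ]
    ```

    After editing the files, steps 12–14 generate and apply the migrations, and step 17 starts the development server so you can hit the endpoint with a browser or curl.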

    Notes

    • At the end you will have two folders: the folder that holds the project’s code, named after the project, and a virtual environment folder that holds all of the dependencies.
    • The project name in this repo is python_api
    • The app name in this repo is myapp

    Visit original content creator repository