Blog

  • DeepChess

    DeepChess: Learning to Play Chess with Minimal Lookahead and Deep Value Neural Networks

This directory contains the basic scripts necessary to reproduce the results of my MSc Artificial Intelligence thesis, Learning to Play Chess with Minimal Lookahead and Deep Value Neural Networks.

A scientific article about this work was presented in January 2018 at the International Conference on Pattern Recognition Applications and Methods in Madeira, Portugal. The paper can be found here: http://www.scitepress.org/PublicationsDetail.aspx?ID=xWk5QRREnQk=&t=1

My entire thesis can be downloaded from here: https://www.researchgate.net/publication/321028267_Learning_to_Play_Chess_with_Minimal_Lookahead_and_Deep_Value_Neural_Networks

    or here:

    http://www.ai.rug.nl/~mwiering/Thesis_Matthia_Sabatelli.pdf

The basic steps required to reproduce my results are the following:

1. Parse a large set of chess games played by highly ranked players, which you can download from http://www.ficsgames.org/download.html, with the pgnsplitter.py script
    2. Label the positions present in the individual games with the evaluation function of Stockfish and create appropriate board representations for the ANNs with the DatasetCreator.py script
    3. Create the different Datasets that are reported in my MSc Thesis with either the BitmapDataset.py script if you want to train a MLP or with the FeatureDataset.py script if you want to train a CNN
    4. Train the ANNs as shown in the ExampleCNN.py class. Check my MSc Thesis for the best set of hyperparameters
5. Once you have trained the ANNs, let them compete against each other as shown in ANNsCompetition.py

Feel free to contact me with any questions or for the weights of the ANNs 🙂

Original repository: https://github.com/paintception/DeepChess

  • woftool

    woftool

    woftool is a proof-of-concept utility that allows you to take a source file and store its WOF compressed version
    as a different file. Only the XPRESS algorithm is implemented, but you can choose from all of the supported block
    sizes (4K, 8K, 16K).

woftool is also multithreaded and allows you to specify the number of threads to use during compression.

    Motivation

Recently I wanted to download ~10TB+ of text data, but I didn’t have 10TB of spare storage hanging around. That led
me to re-discover filesystem compression. During my research I found out that since Windows 10 you can actually
use so-called “WOF compression”, which offers multithreaded compression, a higher compression ratio, and multiple
compression algorithms.

    WOF compression

WOF (Windows Overlay Filter) compression is described in detail by Raymond Chen on his blog, but I’ll
try to summarize the important points:

• WOF compression is not native NTFS file-system compression
• WOF compression is handled by a file system driver (wof.sys) that is usually loaded in a regular default Windows
  installation
    • From NTFS point of view, WOF compressed files have these characteristics:
      • They are sparse files with no data
      • File size is set to the size of uncompressed data (but because they’re sparse with no data, they take no disk space)
      • They have :WofCompressedData Alternate Data Stream, which contains the actual compressed data
      • They have IO_REPARSE_TAG_WOF reparse point set
• Decompression of WOF compressed files is handled transparently by the wof.sys driver – applications don’t have to
  care whether the file is compressed or not
• However, if you try to write to a WOF compressed file, the file is transparently decompressed (and the compressed
  file is replaced with its decompressed version)
• There is no option to mark a folder as “WOF compressed” and expect that every file written there will be compressed

From this information we can gather that WOF compression is useful for files that aren’t modified.

    compact.exe

Windows already has a built-in utility for compressing files – compact.exe. It has been part of Windows for a long time,
and before Windows 10 it could only enable/disable the standard NTFS compression.

    Starting with Windows 10, compact.exe has been extended and supports creating WOF compressed files. You can compress
    a file with this command:

compact.exe /c /exe:lzx "file.bin"
    

    … and decompress it with:

compact.exe /u /exe:lzx "file.bin"
    

The /exe parameter has a somewhat misleading name – this parameter serves as a selector of the compression algorithm.
You can choose from:

    • XPRESS4K (fastest) (default)
    • XPRESS8K
    • XPRESS16K
    • LZX (most compact)

Note that when uncompressing a WOF compressed file (/u), you need to specify the /exe parameter again; otherwise
compact.exe will try to reset the standard NTFS compression.

    Internals

Internally, compact.exe does nothing more than open the file and issue a DeviceIoControl:

// Requires <windows.h> and <winioctl.h>, which define WOF_EXTERNAL_INFO,
// FILE_PROVIDER_EXTERNAL_INFO_V1 and FSCTL_SET_EXTERNAL_BACKING.
// FileHandle is an open HANDLE to the target file; BytesReturned is a DWORD.

struct
{
  WOF_EXTERNAL_INFO WofInfo;
  FILE_PROVIDER_EXTERNAL_INFO_V1 FileInfo;
} Buffer;
    
    Buffer.WofInfo.Version = WOF_CURRENT_VERSION;                   // 1
    Buffer.WofInfo.Provider = WOF_PROVIDER_FILE;                    // 2
    Buffer.FileInfo.Version = FILE_PROVIDER_CURRENT_VERSION;        // 1
    Buffer.FileInfo.Algorithm = FILE_PROVIDER_COMPRESSION_XPRESS4K;
    Buffer.FileInfo.Flags = 0;
    
    //
    // Valid Algorithm values:
    //
    // #define FILE_PROVIDER_COMPRESSION_XPRESS4K   (0x00000000)
    // #define FILE_PROVIDER_COMPRESSION_LZX        (0x00000001)
    // #define FILE_PROVIDER_COMPRESSION_XPRESS8K   (0x00000002)
    // #define FILE_PROVIDER_COMPRESSION_XPRESS16K  (0x00000003)
    //
    
    DeviceIoControl(FileHandle,
                    FSCTL_SET_EXTERNAL_BACKING,
                    &Buffer,
                    sizeof(Buffer),
                    NULL,
                    0,
                    &BytesReturned,
                    NULL);

    That’s it. This IOCTL will be captured by wof.sys, which does the heavy lifting.

    The actual content of the :WofCompressedData stream consists of 2 parts:

    • “Chunk table”
    • Actual compressed data

The chunk table is simply an array of uint32_t elements, and each item contains the offset of the next compressed chunk.
One might ask: what if the file is bigger than 4GB? The answer: if the uncompressed file is bigger
than 4GB, the chunk table consists of uint64_t elements instead.

The actual compressed data is simply the concatenated compressed blocks. If any compressed block would be larger than
the uncompressed block, that block is stored as uncompressed data instead.
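
To make this layout concrete, here is a minimal sketch in Python (illustrative only – woftool itself is written in C). It assumes the sub-4GB case (uint32_t entries) and that the table holds one entry per chunk boundary, so that the first chunk starts right after the table and the last chunk runs to the end of the stream; treat those layout details as assumptions, not a specification:

# Hedged sketch: split a raw :WofCompressedData buffer into its chunks.
# Assumes uint32_t table entries (uncompressed file < 4GB) and N-1
# boundary entries for N chunks, relative to the start of the data area.
import struct

def split_chunks(stream: bytes, uncompressed_size: int, chunk_size: int = 4096):
    num_chunks = (uncompressed_size + chunk_size - 1) // chunk_size
    table_len = (num_chunks - 1) * 4                 # one uint32_t per boundary
    offsets = struct.unpack(f"<{num_chunks - 1}I", stream[:table_len])
    data = stream[table_len:]                        # concatenated chunks
    bounds = [0, *offsets, len(data)]
    # A chunk stored at exactly chunk_size bytes is raw, uncompressed data,
    # per the rule described above.
    return [data[bounds[i]:bounds[i + 1]] for i in range(num_chunks)]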

    You can find more information on
    FSCTL_SET_EXTERNAL_BACKING,
    WOF_EXTERNAL_INFO and
    FILE_PROVIDER_EXTERNAL_INFO_V1
    on MSDN.

    Problem

You might have spotted one limitation – there is no way to take a source file and compress it into another
file. Everything is done in-place.

My specific use-case was to download the data and compress it onto a USB-connected external hard drive (yes, the
spinning kind). However, it’s not possible to compress a file on one disk and transfer the compressed file to
another disk – it gets decompressed during the copy. The only option seemed to be to store all files on the external
drive and continuously compress them there. That has an obvious disadvantage – it’d be painfully slow.

One might ask – couldn’t you just use some kind of backup tool that backs up files with all Alternate Data Streams?
The answer is, unfortunately, no.

The reason it’s not possible is that the wof.sys filter driver actually hides the :WofCompressedData
stream – it’s not visible to any tool. Also, any attempt to directly create or open :WofCompressedData results in
STATUS_ACCESS_DENIED.

    Solution

    What about the other way around? What if we tried to create :WofCompressedData stream and fill it ourselves?

As I mentioned earlier, creating :WofCompressedData directly is not possible. However, what is possible is to create
a stream with any other name – and then rename it to :WofCompressedData!

    But there is another obstacle – the WOF compressed file is also defined by the IO_REPARSE_TAG_WOF reparse point.
    You can set reparse point on a file by issuing FSCTL_SET_REPARSE_POINT on it.

If you guessed that wof.sys filters this IOCTL and returns STATUS_ACCESS_DENIED, you’d be
right. But for some reason wof.sys doesn’t filter the FSCTL_SET_REPARSE_POINT_EX IOCTL – so it is actually possible
to create the reparse point this way.

    Usage

    woftool.exe <source> <destination> <algorithm> <threads>
    

    Valid values for <algorithm>:

    • xpress4k
    • xpress8k
    • xpress16k

    Examples:

    woftool.exe "source.txt" "destination.txt" xpress16k 1
    woftool.exe "C:\test.txt" "D:\test.txt" xpress8k 4
    

    Compilation

Because the Native API header files from the Process Hacker project are attached as a git submodule, don’t
forget to fetch them:

    git clone --recurse-submodules https://github.com/wbenny/woftool

After that, compile woftool using Visual Studio 2019. A solution file is included. No other dependencies are
required.

    Implementation

The WOF compression is handled by the pair of wof.c/wof.h files, which depend only on ntdll.dll. Multithreading
is handled using the Tp thread-pool routines exported by ntdll.dll.

    Remarks

Please note that this is a proof-of-concept implementation, and thus it may contain bugs.
Do not take the validity of the created files for granted, as they may be corrupted. I take no responsibility for any
data loss.

    Special thanks

Special thanks go to jonasLyk, who nudged me in the right direction during my research and implementation.

    License

    This software is open-source under the MIT license. See the LICENSE.txt file in this repository.

    Dependencies are licensed by their own licenses.

    If you find this project interesting, you can buy me a coffee

      BTC 3GwZMNGvLCZMi7mjL8K6iyj6qGbhkVMNMF
      LTC MQn5YC7bZd4KSsaj8snSg4TetmdKDkeCYk
    

Original repository: https://github.com/wbenny/woftool

  • textology



    Textology

    The study of making interactive UIs with text.

Why should GUIs have all the fun? Textology extends the amazing functionality of Textual and Rich to help create TUIs with popular UI design patterns and principles from web and mobile frameworks. Refer to Top Features for a summary of the extensions provided.

    Additional Background

Commonly known as Text (or Terminal) User Interfaces, the goal of a TUI (Tooey) is to provide an experience as close as possible to a traditional GUI, straight from a terminal. Why? Not all environments allow full graphical library access, web access, etc., but almost all provide access to a terminal. Yes, even terminals can provide mouse support, sophisticated layouts, animations, and a wide range of colors!

    Like Textual, Textology is inspired by modern web development. Textology extends Textual by bringing together, and expanding upon, designs from other frameworks such as Dash/FastAPI/Flask, including their use of routing, annotations, and observation patterns. Why? To increase developer velocity based on existing experience. Textology also receives inspiration from other UI frameworks external to Python, such as iOS, Android, and Web frameworks. Most importantly however, Textology is an extension of Textual: it does not replace Textual, but rather provides additional options on top of the core Textual framework.

Before using Textology, be comfortable with Textual. For tutorials, guides, etc., refer to the Textual Documentation. Textology is NOT a replacement for Textual; it is an extension. Callbacks, widgets, screens, event lifecycles, etc., from Textual still apply to Textology extended widgets and applications. For other advanced features, familiarity with Dash/FastAPI/Flask principles will help. Examples for Textology extensions, such as callback based applications, are included in this documentation.


    Top Features

    • Multiple theme support
      • Swap CSS themes live
      • Apply multiple CSS themes simultaneously
    • Extended callbacks
      • Declare Widget callbacks/event handling on instantiation or subclassing
      • Add Widget callbacks after instantiation
      • Use temporary callbacks that only trigger once
      • Declare Apps with “event driven architecture/observation pattern” to detect changes and automatically update UI
        • Listen to reactive attribute changes.
        • Listen to events/messages/errors.
    • Extended native widgets, including (but not limited to):
      • All widgets: ability to disable event messages and declare styles without subclassing
      • ListItems with data objects
      • Buttons with automatic tracking of click counts
    • New widgets, including (but not limited to):
      • ListItemHeaders (non-interactive ListItems)
      • HorizontalMenus (walkable list of ListViews with peeking at following lists)
      • MultiSelect (dropdown list with ability to select multiple items).
    • Enhanced testing support
      • Parallel tests via python-xdist
      • Custom testing arguments, such as updating snapshots on failures
      • Ability to quickly view results, expectations, and differences, via HTML reports

    Compatibility

Textology follows Textual Compatibility guidelines with one exception: a Python 3.10 minimum requirement.

    Getting Started

    Installation

    Install Textology via pip:

    pip install textology

    For development of applications based on Textual/Textology (but not development of Textology itself), use the [dev] package. This installs extra Textual development tools, and requirements for Textology testing extensions.

    pip install textology[dev]

For full development of Textology itself, refer to Contributing. This installs Textual development tools, requirements for Textology testing extensions, and the full QA requirements to meet commit standards. This setup has the highest library requirements, in order to match the versions used by Textology itself for testing. It is required if developing Textology itself, and recommended if looking to match or exceed the level of QA testing used by Textology.

    Extended Widgets

    Native Textual widgets can be directly swapped out with Textology extended equivalents. They can then be used as is (standard Textual usage), or with extensions (via extra keyword arguments).

    • Basic swap (no extensions):
    # Replace:
    from textual.widgets import Button
    
    # With:
    from textology.widgets import Button
    • Instance callback extension (avoid global watchers and event chaining, repeat/temporary application, single/multiple)

      from textology.widgets import Button
      
      button = Button(
          callbacks={Button.Pressed: lambda event: print("Don't press my buttons...")},
      )
      • Callbacks can also be single fire (repeat false)
        from textology.widgets import Button
        
        button = Button(
            callbacks={(Button.Pressed, False): lambda event: print("Don't press my buttons...")},
        )
      • Callbacks can also be added via the handler name
        from textology.widgets import Button
        
        button = Button(
            callbacks={"on_button_pressed": lambda event: print("Don't press my buttons...")},
        )
      • Callbacks can also be added after instantiation
        from textology.widgets import Button
        
        button = Button()
        button.add_callback(Button.Pressed, lambda event: print("Don't press my buttons..."))
      • Callbacks can also be added for exceptions
        from textology.widgets import Button
        
        button = Button()
        button.add_callback(ValueError, lambda exception: print("This error had exceptional value..."))
    • Instance style extension (set styles directly at instantiation based on logic):

    from textology.widgets import Button
    
    feeling_blue = True
    
    button = Button(
        styles={
            "background": "blue" if feeling_blue else "green",
        },
    )
    • Instance message disable extension (avoid unused event chains, such as in large ListViews):
    from textual import events
    from textology.widgets import ListItem
    
    item = ListItem(
        disabled_messages=[events.Mount, events.Show],
    )

    Extended Applications

    Textology App classes, such as WidgetApp, can replace any regular Textual App, and be used as is without any extensions turned on. Here are examples of the most commonly used application subclasses, WidgetApp and ExtendedApp, and their primary extended functionality being used. More detailed examples of applications based around routes, callbacks, and standard Textual applications can be found in Examples.

    • Basic App without subclassing:

      from textology.apps import WidgetApp
      from textology.widgets import Button, Container, Label
      
      app = WidgetApp(
          Container(
              Button("Ping", callbacks={
                  Button.Pressed: lambda event: app.query_one('#label').update("Ping")
              }),
              Button("Pong", callbacks={
                  Button.Pressed: lambda event: app.query_one('#label').update("Pong")
              }),
              Button("Sing-a-long", callbacks={
                  Button.Pressed: lambda event: app.query_one('#label').update("Sing-a-long")
              }),
              Label(id="label")
          )
      )
      app.run()
    • Observer/callback application (automatic attribute monitoring and updates by element IDs without manual queries):

      from textology.apps import ExtendedApp
      from textology.observers import Modified, Select, Update
      from textology.widgets import Button, Container, Label
      
      app = ExtendedApp(
          child=Container(
              Button("Ping", id="ping-btn"),
              Button("Pong", id="pong-btn"),
              Button("Sing-a-long", id="sing-btn"),
              Container(
                  id="content",
              ),
          )
      )
      
      @app.when(
          Modified("ping-btn", "n_clicks"),
          Update("content", "children"),
      )
      def ping(clicks):
          return Label(f"Ping pressed {clicks}")
      
      @app.when(
          Modified("pong-btn", "n_clicks"),
          Update("content", "children"),
      )
      def pong(clicks):
          return Label(f"Pong pressed {clicks}")
      
      @app.when(
          Modified("sing-btn", "n_clicks"),
          Select("ping-btn", "n_clicks"),
          Select("pong-btn", "n_clicks"),
          Update("content", "children"),
      )
      def song(song_clicks, ping_clicks, pong_clicks):
          if not ping_clicks or not pong_clicks:
              return Label(f"Press Ping and Pong first to complete the song!")
          return Label(f"Ping, pong, sing-a-long song pressed {song_clicks}")
      
      app.run()
    • Callbacks can also be async:

      @app.when(
          Modified("pong-btn", "n_clicks"),
          Update("content", "children"),
      )
      async def delayed_pong(clicks):
          await asyncio.sleep(3)
          return Label(f"Pong pressed {clicks} and updated 3 seconds later")
    • Callbacks can also catch Exceptions from other callbacks:

      @app.when(
          Raised(Exception),
      )
      def error_notification(error):
          app.notify(f"An unknown error occurred: {error}", title="Error")
    • Callbacks can also listen for stateless events, not just stateful attribute updates
      from textology.apps import ExtendedApp
      from textology.observers import Published, Select, Update
      from textology.widgets import Button, Container, Label
      
      app = ExtendedApp(
          child=Container(
              Button("Ping", id="ping-btn"),
              Button("Pong", id="pong-btn"),
              Button("Sing-a-long", id="sing-btn"),
              Container(
                  id="content",
              ),
          )
      )
      
      @app.when(
          Published("ping-btn", Button.Pressed),
          Update("content", "children"),
      )
      def ping(event):
          return Label(f"Ping pressed {event.button.n_clicks}")
      
      @app.when(
          Published("pong-btn", Button.Pressed),
          Update("content", "children"),
      )
      def pong(event):
          return Label(f"Pong pressed {event.button.n_clicks}")
      
      @app.when(
          Published("sing-btn", Button.Pressed),
          Select("ping-btn", "n_clicks"),
          Select("pong-btn", "n_clicks"),
          Update("content", "children"),
      )
      def song(event, ping_clicks, pong_clicks):
          if not ping_clicks or not pong_clicks:
              return Label(f"Press Ping and Pong first to complete the song!")
          return Label(f"Ping, pong, sing-a-long song pressed {event.button.n_clicks}")
      
      app.run()
    • Callbacks can also be registered on methods, to share across all application instances
      from textology.apps import ExtendedApp
      from textology.observers import Published, Select, Update, when
      from textology.widgets import Button, Container, Label
      
      class Page(Container):
          def compose(self):
              yield Button("Ping", id="ping-btn")
              yield Button("Pong", id="pong-btn")
              yield Button("Sing-a-long", id="sing-btn")
              yield Container(
                  id="content",
              )
      
          @when(
              Published("ping-btn", Button.Pressed),
              Update("content", "children"),
          )
          def ping(self, event):
              return Label(f"Ping pressed {event.button.n_clicks}")
      
          @when(
              Published("pong-btn", Button.Pressed),
              Update("content", "children"),
          )
          def pong(self, event):
              return Label(f"Pong pressed {event.button.n_clicks}")
      
          @when(
              Published("sing-btn", Button.Pressed),
              Select("ping-btn", "n_clicks"),
              Select("pong-btn", "n_clicks"),
              Update("content", "children"),
          )
          def song(self, event, ping_clicks, pong_clicks):
              if not ping_clicks or not pong_clicks:
                  return Label(f"Press Ping and Pong first to complete the song!")
              return Label(f"Ping, pong, sing-a-long song pressed {event.button.n_clicks}")
      
      app = ExtendedApp(
          child=Page()
      )
      
      app.run()
    • Callbacks can also use Dash code style (Same as others, but with Dash compatibility object and calls)
      from textology.dash_compat import DashCompatApp, Input, Output, State
      from textology.widgets import Button, Container, Label
      
      app = DashCompatApp(
          layout=Container(
              Button("Ping", id="ping-btn"),
              Button("Pong", id="pong-btn"),
              Button("Sing-a-long", id="sing-btn"),
              Container(
                  id="content",
              ),
          )
      )
      
      @app.callback(
          Input("ping-btn", "n_clicks"),
          Output("content", "children"),
      )
      def ping(clicks):
          return Label(f"Ping pressed {clicks}")
      
      @app.callback(
          Input("pong-btn", "n_clicks"),
          Output("content", "children"),
      )
      def pong(clicks):
          return Label(f"Pong pressed {clicks}")
      
      @app.callback(
          Input("sing-btn", "n_clicks"),
          State("ping-btn", "n_clicks"),
          State("pong-btn", "n_clicks"),
          Output("content", "children"),
      )
      def song(song_clicks, ping_clicks, pong_clicks):
          if not ping_clicks or not pong_clicks:
              return Label(f"Press Ping and Pong first to complete the song!")
          return Label(f"Ping, pong, sing-a-long song pressed {song_clicks}")
      
      app.run()

    Extended Testing

Don’t want to serialize your pytests? Looking for the ability to quickly visualize differences when UIs change? You’ve come to the right place. Textology extends Textual SVG snapshot capabilities to add support for parallel processing during tests (python-xdist), and custom options such as auto-updating SVG snapshots on failures. In order to use the pytest extensions automagically, add the following to a conftest.py in the root of the project. This will enable usage of the compare_snapshots fixture, and HTML report generation on failure, automatically.

    pytest_plugins = ("textology.pytest_utils",)
    • Basic snapshot test:
      import pytest
from textual.app import App
      from textual.widgets import Button
      
      class BasicApp(App):
          def compose(self):
              yield Button("Click me!")
      
      @pytest.mark.asyncio
      async def test_snapshot_with_app(compare_snapshots):
          assert await compare_snapshots(BasicApp())

    Other advanced testing features include:

    • Ability to pass an App, App Pilot, or a module containing an instantiated App or Pilot, to fixtures
    • Custom snapshot paths, including reusing the same snapshot across multiple tests
    • Automatic SVG updates with pytest --txtology-snap-update

View all options by running pytest -h and referring to the Custom options: section.

Original repository: https://github.com/pyranha-labs/textology
  • FakeTlsTunnel

It is recommended to use the reverse tunnel instead

Since the reverse tunnel has been released and I’ve gotten very good results with it, I suggest trying it first, and only coming back to this project if it really doesn’t work for you.

Link

General introduction

Thanks to our friend AminiYT for the video tutorial, and to Segaro, dear Vahid, and all the friends working hard for free access to the internet.

To update, simply re-run the install command.

To run in multi-port mode, read here.

To run on a router, read here.

This program is, broadly, about passing traffic with an arbitrary domain (SNI) for tunnels and TCP-based configs such as OpenVPN, Trojan, VMess, and VLESS. For those who run tunnels: you run one instance of this program on the tunnel server and another on your foreign server. With this method, regardless of the config type and its details, you can complete the TLS handshake with a completely arbitrary domain – without the slightest change to the configs your users already have.

Doing this, first of all, fixes the low-quality connection and poor upload speed over the tunnel.

Second, it keeps the foreign server’s IP from getting blocked after a while (I’m on an Infrastructure datacenter that used to block IPs a lot, and with this method no IP has been blocked for about two weeks now; another friend has verified the same on a different datacenter). Of course, the IP can still be blocked manually by the datacenter, given the one-to-one traffic ratio.

You can also apply this program to direct configs, if you run the tunnel-side instance on your personal machine and use it as a proxy. My main goal in writing it was for it to run in a tunnel, but I mention this point in passing.

This program is a combination of the SNI-changing idea from the REALITY protocol and some ideas of my own; if I find the time, updates such as load balancing will be added.

This program has been hardened against, and tested for, firewall Active Detection.
Since certificates are fully verified, a man-in-the-middle attack has no effect on this method.

How to set up the tunnel

First, note that one instance of this program must run on the Iran server and one on the foreign server.

Read the instructions once through before running anything; I also suggest starting the program on the foreign server first, then on the Iran server.

Method 1 (install via script):

    bash <(curl -fsSL https://raw.githubusercontent.com/radkesvat/FakeTlsTunnel/master/FtTunnel.sh)
    


Method 2:

Iran server

Before anything else, turn off your previous tunnel software. Then download the program’s executable with this command:

    wget  "https://raw.githubusercontent.com/radkesvat/FakeTlsTunnel/master/install.sh" -O install.sh && chmod +x install.sh && bash install.sh 

Now you can run the program like this:

    ./FTT --tunnel --lport:443 --toip:88.104.1.1  --toport:443 --sni:github.com --password:123ab

--lport:

Specifies which port to listen on; this can be 443, for example, or another port, and it is the same port the user’s config points to. The program also has a multi-port mode: you can tell it to accept a range of ports and forward each to the same port on the foreign server. That is known as the multi-port feature; I’ve documented it on a separate page you can read.

--toip:

The IP of your foreign server. Put only one IP here for now; a load-balancing mode will be added in future updates.

--toport:

The port of your foreign server. It is very, very important that this is 443. It doesn’t matter which port your panel runs on; the panel must run on some port other than 443, but when we run the second instance of this program on our foreign server, we tell it to listen on port 443 there. If you use a port other than 443, the chance of getting filtered is high. I’ll explain more about this when we set up the foreign server.

--sni:

The domain we want to complete the handshake with. Unlike REALITY, you don’t need to scan the site; you can enter any domain that opens in your browser, even google.com. Whatever domain you enter here must be entered exactly the same on the server side, or it won’t work.

--password:

This password is used to encrypt the data between the tunnel and the destination server. Enter whatever you like, even 123; it doesn’t matter at all, because we use a hash of this password. Whatever password you enter here must be entered exactly the same on the server side, or it won’t work.

That covers the Iran tunnel server.

Foreign server

The first point is that you must free up port 443: move your panel, nginx, and anything else off this port onto a different one.

First download the program:

    wget  "https://raw.githubusercontent.com/radkesvat/FakeTlsTunnel/master/install.sh" -O install.sh && chmod +x install.sh && bash install.sh

Now we can run it like this:

    ./FTT --server --lport:443 --toip:127.0.0.1 --toport:443 --sni:github.com --password:123ab

--lport:

As I said, for better safety this port should be 443. It is the same port you gave as --toport in the Iran server command, so if you put something other than 443 there, put the same value here.

--toip:

Here we say which IP the packets should be forwarded to. Since we are on the foreign server itself, we send them to this same machine, so the value must be 127.0.0.1.

--toport:

This parameter must be the port your panel created the config on. Note: this is not the port of the panel’s website or of nginx; it is the port of the config itself. For example, if you created the user on port 2000, enter 2000 here. The important point is that most people create their configs on port 443. In your panel you must change this to something other than 443, because the tunnel program itself needs to grab 443, but you don’t need to change the port in the config you gave the user; for them it must stay 443. The tunnel program receives the packets on 443 and forwards them to this port. I hope that’s clear; there is a complete example at the end, read that too.

--sni:

Exactly what you set on the Iran server.

--password:

Exactly the same password you set on the Iran server.

A complete example

My foreign server’s IP: 88.1.2.3

My Iran server’s IP: 5.4.3.2

My panel was creating Trojan configs on port 443, which I gave to users.

Now, first, in the panel I change port 443 to 2083.

Then I run this command on the Iran server:

    ./FTT --tunnel --lport:443 --toip:88.1.2.3  --toport:443 --sni:github.com --password:123ab

And on my foreign server I enter this command:

    ./FTT --server --lport:443 --toip:127.0.0.1 --toport:2083 --sni:github.com --password:123ab

Our tunnel setup is now complete, without any change to the config in the users’ hands.

An important note about the SNI

Note that github.com is filtered; don’t use it, I only put it there as an example.

The sni parameter and its value are very important. I can safely say that if you are sure of your setup and no data is getting through, this domain is the problem. I tested with plenty of domains on Asiatech and data wouldn’t pass, but then I tried the domain of a very famous site that all of you know, 100 percent, and it worked, with excellent ping and jitter.

When picking a domain, I suggest avoiding domains whose servers are in Iran; famous foreign domains, or really any foreign domain you can visit without a VPN, can be a good choice. In short: test.

I’ll leave one clean SNI here, hoping it isn’t filtered by the time you read this: data.services.jetbrains.com

What if it doesn’t connect?

If the program throws an error, the error text gives you a pretty good idea of what the problem is.

ssl connect error, connection error, no connection, etc.

This means there is a problem reaching the foreign server. Reasons:

First check that values such as the IP and port are entered correctly. If it still doesn’t connect:

Either the IP is blocked (a blocked IP usually doesn’t give a clean ping, but not always!)

Or the SNI isn’t clean; there are SNIs that, if used, don’t allow a connection at all, because that SNI is blocked entirely.

    Address already in use

This happens when the port you gave as --lport is already taken by another program; you need to free the port.

Speed and connection quality are poor

If you can connect but the speed, ping, or jitter isn’t good, definitely test with a different SNI; with a better SNI the connection will most likely become normal.

The program stops after closing the SSH session

To avoid this, run the program with either screen or nohup.

For example, for this command to keep running after closing SSH:

    ./FTT --tunnel --lport:443 --toip:88.1.2.3  --toport:443 --sni:github.com --password:123ab

run it like this:

    nohup ./FTT --tunnel --lport:443 --toip:88.1.2.3  --toport:443 --sni:github.com --password:123ab &

Run it this way on both the foreign and the Iran server so it stays up. When run this way, stop it with this command:

    pkill FTT

The program was working but stopped after some days

If this happens, please open an issue so we can find the cause. Also, stop and restart the tunnel once to get it working again; the stop command is given above.

What do “clean” and “dirty” SNI mean?

Based on the tests I’ve run, I’m almost certain that in order to block the REALITY protocol, they mapped each domain to a specific IP, so that only that IP can connect with that domain. This of course requires enormous processing, and it isn’t possible to do it for all SNIs because there are far too many. That’s why many SNIs won’t connect, while at the same time many SNIs remain and can still be used.

Support

If you’d like, you can donate TRON to this address:

    THMbaTeDjeEygjVZQu91k9cqVw1v4TsdAk

The internet is for everyone, or for no one!

In hope of freedom

Original repository: https://github.com/radkesvat/FakeTlsTunnel
  • AlexaBigQuizMaster

    AlexaBigQuizMaster

Alexa BIG Quiz Master is an app that uses a set of questions and answers in quiz format to test interaction with sight-impaired students in schools.

The structure of the quiz is multiple choice, so button clicks and patterns can easily be used to interact with the system, rather than depending on a relatively easier True/False format.

    Flow of the program

The flow of the program involves:

    1. Creating the quiz format using the questions_read_write.py file.
    2. Sending / Uploading the file to AWS Lambda including the questionAnswerData.json file.
    3. Using the front end interaction model to interact with the app on Alexa.

    For Step 1:

The program looks for a file called questionAnswerData.json and loads the current JSON contents into memory (to make it easier to write out in a structured format). The program then asks the user for a question and the THREE possible answers, in the order a->b->c.

These are then stored with their a, b, c labels and saved to the file as the JSON structure defined in the file itself.
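
As a rough illustration of that flow (a sketch only – the field names "question", "a", "b", "c", and "correct" are assumptions, not the repository’s actual schema):

import json, os

FILENAME = "questionAnswerData.json"

def load_quiz():
    # Load the existing quiz into memory, or start fresh if the file is missing.
    if os.path.exists(FILENAME):
        with open(FILENAME) as f:
            return json.load(f)
    return {"questions": []}

def add_question(quiz):
    # Take a question and its THREE answers, in the order a -> b -> c.
    entry = {"question": input("Question: ")}
    for option in ("a", "b", "c"):
        entry[option] = input(f"Answer {option}: ")
    entry["correct"] = input("Correct option (a/b/c): ").strip().lower()
    quiz["questions"].append(entry)

def save_quiz(quiz):
    # Write everything back out in the structured JSON format.
    with open(FILENAME, "w") as f:
        json.dump(quiz, f, indent=2)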

    Alexa App Interaction

    Invocation:

    To use the app, the user must say Alexa, Ask Big Quiz ... where ... is the command.

    To get a list of commands or how to interact with the app, the user can say Alexa, Ask Big Quiz for help.

    The list of commands are as follows:

    1. Ask Big Quiz for Help : The app explains what it’s made for and how to interact with it.
2. Ask Big Quiz to ask me a question: The quiz master will ask you a question from the JSON file, usually in order (not random), and expects an answer back, either immediately or via the next command
3. Ask Big Quiz is the answer <a,b,c>: The options a, b, c correspond to the question. Be aware that you cannot reply with the answer itself; it has to be the option. For example, if the options are Hitler, Germany, or War, you must reply A for Hitler, B for Germany, or C for War instead of replying Hitler, Germany, or War.
4. Ask Big Quiz to reset the quiz: This is to reset the quiz in case something has gone wrong or to retest interactions. This resets all variables in the backend to their initial values.

    Flow:

The flow of the program is relatively plain: the program waits for an instruction to ask a question / start the quiz / reset the quiz, then initialises its variables.

There is a check that verifies whether the quiz has been initialised (i.e. the program has read in the questions and answers from the JSON file) before the program can ask questions. If the user asks for a question before initialisation, the program initialises itself and expects the user to repeat the request for a question.
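
In sketch form (illustrative names, not the skill’s actual handlers; load_quiz is from the sketch above):

quiz_state = {"initialised": False, "questions": [], "current": 0}

def handle_question_request():
    # Guard: initialise from the JSON file before any question can be asked.
    if not quiz_state["initialised"]:
        quiz_state["questions"] = load_quiz()["questions"]
        quiz_state["initialised"] = True
        return "The quiz is ready now. Please ask me for a question again."
    q = quiz_state["questions"][quiz_state["current"]]
    return f"{q['question']} A: {q['a']}. B: {q['b']}. C: {q['c']}."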

    Extension / Bugs:

1. A much-needed extension to the app would be the ability to get Alexa to repeat the current question, and possibly say it more slowly this time.

2. Send the questions through an Alexa speech helper, as available in the repository, to make them more understandable for users; right now Alexa seems to ignore grammar in the questions.

    3. Allow users to check current score at any point / include it in the questions being asked.

4. BUG: The user can say A but Alexa might hear it as Hey and consider the answer incorrect. A solution is to simply add a check along the lines of if (~ any form of hey/Hey/HEY ~): answer = A.

5. BUG: At some point the program might lose track of how many questions have been done, and might repeat questions. A simple way to keep track of this is to add a tag to the JSON file recording the number of questions completed. Make sure to reset this whenever the program ends.

This can be done using the end-of-session request (officially, Alexa’s SessionEndedRequest) that exists in Alexa’s requests.

    About / Support

The project was made in association with the Bristol Interaction Group, as part of a research project looking into using smart devices such as Alexa to create a more inclusive learning system for sight-impaired students in classes. This app in particular was created as a base to test interaction for MCQ-based quizzes, where button inputs and other non-vocal inputs could be tested.

    For support, contact Sunny Miglani.

Original repository: https://github.com/sunnyMiglani/AlexaBigQuizMaster

  • stoffmagasin.no

    Draft:

    Here’s 7000+ lines of code for ya.

    • Google Pagespeed score: 96% (https://gtmetrix.com/reports/www.stoffmagasin.no/mbJiHxdM)

    • Loads in less than 0.4 seconds: https://tools.pingdom.com/#!/dzAA6p/https://www.stoffmagasin.no/

• The code is messy, to be fixed. I solve this by washing my hands after a coding session. No but really, I was learning by doing, and the deadline was always yesterday.

• Early in the project, I messed up a media query; I forgot a closing }, so new media queries would not work as they should. I didn’t figure this out until the very end, which is why there’s separate code for the mobile/desktop versions, solved with display:hidden for the respective device. For such short code, I would argue it doesn’t matter much for performance.

• I’ve used a lot of inline CSS. Mostly because it’s easier in a trial-and-error, learning-by-doing approach, but also because it can be good for performance. However, a lot of the inline CSS is duplicated and should be moved into style.css.

    Some cool features:

• I started with 500 articles where elements like journalist, photographer, intro, byline etc. were formatted differently and not stored in variables. A few jQuery scripts fetch the text inside tags; for example, an article’s introduction is inside h4 tags. The script removes the old formatting and applies the new. I just got the idea that the script could take the relevant text inside the tag and save it in an Advanced Custom Fields (plugin) field, so that it would be stored in a variable. To be continued…
• I fixed the featured image layout by creating a PHP script that calculates the aspect ratio of the featured image: if the image has an aspect ratio of x, it uses the most fitting layout template (see the sketch after this list). Because this calculation requires the full image to be downloaded, the script was run once, and new articles need to set this layout manually, for performance reasons.
• By using the WordPress API, I created my own settings panel. This theme needs one variable to be updated; the rest is automatic. This gives the webmaster more control: the current issue number is updated only when all the new articles are uploaded. If this were done automatically, the website would look very empty while the new articles were in the works.
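
As a rough illustration of the aspect-ratio idea from the second bullet above (in Python rather than the site’s PHP, with made-up thresholds and template names):

# Hypothetical sketch of aspect-ratio-based template selection; the real
# theme is PHP, and its thresholds and template names may differ.
def pick_template(width: int, height: int) -> str:
    ratio = width / height
    if ratio >= 1.6:              # clearly landscape
        return "layout-wide"
    if ratio >= 1.0:              # square to mildly landscape
        return "layout-standard"
    return "layout-portrait"      # taller than wide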

Original repository: https://github.com/Marcusln/stoffmagasin.no

  • baking-lyrics

    bakinglyrics.com

This is an open-source side project created from the bottom of the world. (Bottom to be read as arse.) Here is a map. But it is for everyone. Our humble contribution to those who make our lives wonderful by freeing us from our own thoughts (for just a few minutes).

We wish you the best, so play as much as you want: release your creativity, innovate, copy & paste.

    Best regards,

    The Baking-Lyrics coding band.

    www.bakinglyrics.com



    Intro

Baking-Lyrics is based on a group of machine and deep learning models that generate music lyrics and titles automatically. It is currently available in English (soon in Spanish as well).

Baking Lyrics was developed by a team of music, machine learning, and software development enthusiasts all the way from Buenos Aires, Argentina. Our country is well known for its rock scene, so we were tempted to use a 100% rock corpus, but our metal-loving friends convinced us to accept other genres.

    Baking Lyrics: An automatic lyrics generator

    Be a rock star using machine learning to generate lyrics! Baking lyrics is an automatic generator of lyrics based on a natural language model that is trained using the largest database of lyrics online. The vast corpus contains all the lyrics of the most popular bands and singers of our time. This corpus was used to train a language model that reproduces the style of each band or singer. If you ever wanted to reproduce the talent of your favorite songwriter, now is the time!

    How-to

    • Clone the repo.
• Create a Python venv in your OS
    • Source it: source bar/foo/venv/your-venv/bin/activate
    • Run pip install -r requirements/dev.txt
• Request the current models from Andrei Rukavina
• Request the songsdata.csv from Andrei Rukavina
• Put the files under /api/resources/models/ and /api/resources/ respectively
    • Add the following ENV variable into your favourite OS: APP_CONFIG_FILE=/Users/<your name>/GitHub/Baking-Lyrics/config/development.py
    • Add the following ENV variable into your favourite OS: PYTHONPATH=/Users/arukavina/github/baking-lyrics
    • Run cd /api
    • Run refresh_database.py
    • Run manage.py run

    Models

    While developing the app we tried many different models and approaches.

    Deep-Learning models

    Text summarization is a problem in natural language processing of creating a short, accurate, and fluent summary of a source document.

    The Encoder-Decoder recurrent neural network architecture developed for machine translation has proven effective when applied to the problem of text summarization.

    It can be difficult to apply this architecture in the Keras deep learning library, given some of the flexibility sacrificed to make the library clean, simple, and easy to use.

    Skip-Thought Vectors

From arXiv:1506.06726v1 (June 22nd, 2015). By: Ryan Kiros, Yukun Zhu, et al.

This is Baking-Lyrics’ current model.

    The authors considered the following question: ‘is there a task and a corresponding loss that will allow us to learn highly generic sentence representations?’

They gave evidence for this by proposing a model for learning high-quality sentence vectors without a particular supervised task in mind. Using word vector learning as inspiration, they proposed an objective function that abstracts the skip-gram model of [8] to the sentence level:
instead of using a word to predict its surrounding context, they encode a sentence to predict the sentences around it.
Thus, any composition operator can be substituted as a sentence encoder, and only the objective function is modified.

    The following figure illustrates the model:

    model

They called their model skip-thoughts, and the vectors induced by the model are called skip-thought vectors.

    Encoder-Decoder

Encoder. Let $w_i^1, \dots, w_i^N$ be the words in sentence $s_i$, where $N$ is the number of words in the sentence. At each time step, the encoder produces a hidden state $h_i^t$ which can be interpreted as the representation of the sequence $w_i^1, \dots, w_i^t$. The hidden state $h_i^N$ thus represents the full sentence. To encode a sentence, we iterate the following sequence of equations (dropping the subscript $i$):

$$r^t = \sigma(W_r x^t + U_r h^{t-1})$$
$$z^t = \sigma(W_z x^t + U_z h^{t-1})$$
$$\bar{h}^t = \tanh(W x^t + U (r^t \odot h^{t-1}))$$
$$h^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t$$

where $\bar{h}^t$ is the proposed state update at time $t$, $z^t$ is the update gate, $r^t$ is the reset gate, and $\odot$ denotes a component-wise product. Both gates take values between zero and one.

Decoder. The decoder is a neural language model which conditions on the encoder output $h_i$. The computation is similar to that of the encoder, except we introduce matrices $C_z$, $C_r$ and $C$ that are used to bias the update gate, reset gate and hidden state computation by the sentence vector. One decoder is used for the next sentence $s_{i+1}$, while a second decoder is used for the previous sentence $s_{i-1}$. Separate parameters are used for each decoder, with the exception of the vocabulary matrix $V$, which is the weight matrix connecting the decoder’s hidden state for computing a distribution over words.

In what follows we describe the decoder for the next sentence $s_{i+1}$, although an analogous computation is used for the previous sentence $s_{i-1}$. Let $h_{i+1}^t$ denote the hidden state of the decoder at time $t$. Decoding involves iterating through the following sequence of equations (dropping the subscript $i+1$):

$$r^t = \sigma(W_r^d x^{t-1} + U_r^d h^{t-1} + C_r h_i)$$
$$z^t = \sigma(W_z^d x^{t-1} + U_z^d h^{t-1} + C_z h_i)$$
$$\bar{h}^t = \tanh(W^d x^{t-1} + U^d (r^t \odot h^{t-1}) + C h_i)$$
$$h_{i+1}^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t$$

Given $h_{i+1}^t$, the probability of word $w_{i+1}^t$ given the previous $t-1$ words and the encoder vector is

$$P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) \propto \exp\!\left(v_{w_{i+1}^t} \, h_{i+1}^t\right)$$

where $v_{w_{i+1}^t}$ denotes the row of $V$ corresponding to the word $w_{i+1}^t$. An analogous computation is performed for the previous sentence $s_{i-1}$.

Objective. Given a tuple $(s_{i-1}, s_i, s_{i+1})$, the objective optimized is the sum of the log-probabilities for the forward and backward sentences conditioned on the encoder representation:

$$\sum_t \log P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) + \sum_t \log P(w_{i-1}^t \mid w_{i-1}^{<t}, h_i)$$

    Encoder-Decoder Architecture

    Based on: machinelearningmastery.com

    The Encoder-Decoder architecture is a way of organizing recurrent neural networks for sequence prediction problems that have a variable number of inputs, outputs, or both inputs and outputs.

    The architecture involves two components: an encoder and a decoder.

    • Encoder: The encoder reads the entire input sequence and encodes it into an internal representation, often a fixed-length vector called the context vector.
    • Decoder: The decoder reads the encoded input sequence from the encoder and generates the output sequence.

    For more about the Encoder-Decoder architecture, see the post:

    • Encoder-Decoder Long Short-Term Memory Networks

    Both the encoder and the decoder submodels are trained jointly, meaning at the same time.

This is quite a feat because, traditionally, challenging natural language problems required the development of separate models that were later strung into a pipeline, allowing error to accumulate during the sequence generation process.

    The entire encoded input is used as context for generating each step in the output. Although this works, the fixed-length encoding of the input limits the length of output sequences that can be generated.

    An extension of the Encoder-Decoder architecture is to provide a more expressive form of the encoded input sequence and allow the decoder to learn where to pay attention to the encoded input when generating each step of the output sequence.

This extension of the architecture is called attention. The Encoder-Decoder architecture with attention is popular for a suite of natural language processing problems that generate variable length output sequences, such as text summarization. The application of the architecture to text summarization is as follows:

    • Encoder: The encoder is responsible for reading the source document and encoding it to an internal representation.
    • Decoder: The decoder is a language model responsible for generating each word in the output summary using the encoded representation of the source document.
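
As a minimal sketch of this architecture in Keras (illustrative only – the vocabulary size, dimensions, and layer choices are assumptions, and this is not the exact model Baking-Lyrics ships):

# Minimal Encoder-Decoder sketch for sequence generation in Keras.
# All sizes are illustrative assumptions.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

vocab, emb_dim, hidden = 10000, 128, 256

# Encoder: read the source sequence and keep only its final states (context).
enc_in = Input(shape=(None,))
enc_emb = Embedding(vocab, emb_dim)(enc_in)
_, state_h, state_c = LSTM(hidden, return_state=True)(enc_emb)

# Decoder: a language model conditioned on the encoder states.
dec_in = Input(shape=(None,))
dec_emb = Embedding(vocab, emb_dim)(dec_in)
dec_out, _, _ = LSTM(hidden, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
probs = Dense(vocab, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")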

    N-Gram models

N-gram models are probabilistic models that assign probabilities to the “next” word in a sequence, given the n-1 previous words. This algorithm takes in an array of strings (the songs in our corpus) and uses punctuation to select beginning and end tokens for each sentence. Baking Lyrics uses a trigram model: it calculates the frequencies with which every three-word combination appears in each band’s corpus, and extrapolates the probabilities from there.
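
A toy version of the idea (a sketch; the repository’s implementation will differ in tokenization and sampling details):

# Toy trigram model: count three-word sequences per corpus, then sample the
# next word from the conditional distribution. Illustrative only.
import random
from collections import defaultdict

def train_trigrams(songs):
    counts = defaultdict(lambda: defaultdict(int))
    for song in songs:
        words = ["<s>", "<s>"] + song.split() + ["</s>"]
        for w1, w2, w3 in zip(words, words[1:], words[2:]):
            counts[(w1, w2)][w3] += 1
    return counts

def generate(counts, max_words=30):
    w1, w2, out = "<s>", "<s>", []
    for _ in range(max_words):
        followers = counts.get((w1, w2))
        if not followers:
            break
        words, freqs = zip(*followers.items())
        w3 = random.choices(words, weights=freqs)[0]
        if w3 == "</s>":
            break
        out.append(w3)
        w1, w2 = w2, w3
    return " ".join(out)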

    FAQ

    There is always something that breaks

    TensorFlow

If needed, different wheels (*.whl) for TF can be found here: TensorFlow. Use them to upgrade the requirements file accordingly.

    Environment Variables

    If using flask run

    flask run

1. PROD: PYTHONUNBUFFERED=1;FLASK_APP=baking.main:create_app(r'config/production.py');FLASK_ENV=production;FLASK_RUN_PORT=8003
2. TEST: PYTHONUNBUFFERED=1;FLASK_APP=baking.main:create_app(r'config/testing.py');FLASK_ENV=testing;FLASK_RUN_PORT=8001
3. DEV: PYTHONUNBUFFERED=1;FLASK_APP=baking.main:create_app(r'config/development.py');FLASK_RUN_PORT=8000;FLASK_DEBUG=0;FLASK_ENV=development. Feel free to enable debug mode in DEV.

    If using manage run

    python manage.py run

    1. TEST: PYTHONUNBUFFERED=1;APP_CONFIG_FILE=config/testing.py
    2. DEV: PYTHONUNBUFFERED=1;APP_CONFIG_FILE=config/development.py Feel free to enable debug mode in DEV

    To run Tests:

    python manage.py test

    • PYTHONUNBUFFERED=1;APP_CONFIG_FILE=config/testing.py
Original repository: https://github.com/arukavina/baking-lyrics
  • baking-lyrics

    bakinglyrics.com

    This is an open source, side project created from the bottom of the world. (Bottom to be read as arse). Here is a map. But is for everyone. Our humble contribution to those who make our lives wonderful freeing us from our own thoughts (for just a few minutes)

    We wish you the best so play as much as you want, release your creativity, innovate, copy & paste.

    Best regards,

    The Baking-Lyrics coding band.

    www.bakinglyrics.com

    Documentation Status

    Build Status

    codecov


    Intro

    Baking-Lyrics works based on a group of Machine and Deep learning models that generates music lyrics and titles automatically. Currently available in English (soon Spanish as well).

    Baking Lyrics was developed by a team of music; machine learning and software development enthusiasts all the way from Buenos Aires, Argentina. Our country is well know for its rock scene so we were tempted on using a 100% rock corpus but our metal loving friends convinced us of accepting other genres.

    Baking Lyrics: An automatic lyrics generator

    Be a rock star using machine learning to generate lyrics! Baking lyrics is an automatic generator of lyrics based on a natural language model that is trained using the largest database of lyrics online. The vast corpus contains all the lyrics of the most popular bands and singers of our time. This corpus was used to train a language model that reproduces the style of each band or singer. If you ever wanted to reproduce the talent of your favorite songwriter, now is the time!

    How-to

    • Clone the repo.
    • Create a python venv in your SO
    • Source it: source bar/foo/venv/your-venv/bin/activate
    • Run pip install -r requirements/dev.txt
    • Request the current models to Andrei Rukavina
    • Request the songsdata.csv to Andrei Rukavina
    • Put the file under: /api/resources/models/ and /api/resources/ respectively
    • Add the following ENV variable into your favourite OS: APP_CONFIG_FILE=/Users/<your name>/GitHub/Baking-Lyrics/config/development.py
    • Add the following ENV variable into your favourite OS: PYTHONPATH=/Users/arukavina/github/baking-lyrics
    • Run cd /api
    • Run refresh_database.py
    • Run manage.py run

    Models

    While developing the app we tried many different models and approaches.

    Deep-Learning models

    Text summarization is a problem in natural language processing of creating a short, accurate, and fluent summary of a source document.

    The Encoder-Decoder recurrent neural network architecture developed for machine translation has proven effective when applied to the problem of text summarization.

    It can be difficult to apply this architecture in the Keras deep learning library, given some of the flexibility sacrificed to make the library clean, simple, and easy to use.

    Skip-Thought Vectors

    From arXiv:1506.06726v1 (June 22nd 2015) By: Ryan Kiros, Yukun Zhu, et. al.

    This is Baking-Lyrics current model

    The authors considered the following question: ‘is there a task and a corresponding loss that will allow us to learn highly generic sentence representations?’

    they gave evidence for this by proposing a model for learning high-quality sentence vectors without a particular supervised task in mind. Using word vector learning as inspiration, they proposed an objective function that abstracts the skip-gram model of [8] to the sentence level.
    That is, instead of using a word to predict its surrounding context, they instead encode a sentence to predict the sentences around it.
    Thus, any composition operator can be substituted as a sentence encoder and only the objective function becomes modified.

    The following figure illustrates the model:

    model

    They called their model: skip-thoughts and vectors induced by our model are called skip-thought vectors.

    Encoder-Decoder

    Encoder. Let w1i,…,wNi be the words in sentences i where N is the number of words in the sentence. At each time step, the encoder produces a hidden state ht/i which can be interpreted as the representation of the sequence w1i,…,wti. The hidden state hNi thus represents the full sentence. To encode a sentence, we iterate the following sequence of equations (dropping the subscript i):

    model

    where ̄ht is the proposed state update at time-t,z-t is the update gate,rt is the reset gate () denotes a component-wise product. oth update gates takes values between zero and one.Decoder.

    The decoder is a neural language model which conditions on the encoder output hi. The computation is similar to that of the encoder except we introduce matrices Cz,Cr and C that are used to bias the update gate, reset gate and hidden state computation by the sentence vector. One decoder is used for the next sentences i+1 while a second decoder is used for the previous sentences i−1. Separate parameters are used for each decoder with the exception of the vocabulary matrixV, which is the weight matrix connecting the decoder’s hidden state for computing a distribution over words.

    In what follows we describe the decoder for the next sentence s_{i+1}, although an analogous computation is used for the previous sentence s_{i−1}. Let h^t_{i+1} denote the hidden state of the decoder at time t. Decoding involves iterating through the following sequence of equations (dropping the subscript i+1):

    r^t = σ(W^d_r x^{t−1} + U^d_r h^{t−1} + C_r h_i)
    z^t = σ(W^d_z x^{t−1} + U^d_z h^{t−1} + C_z h_i)
    h̄^t = tanh(W^d x^{t−1} + U^d (r^t ⊙ h^{t−1}) + C h_i)
    h^t_{i+1} = (1 − z^t) ⊙ h^{t−1} + z^t ⊙ h̄^t

    Given h^t_{i+1}, the probability of word w^t_{i+1} given the previous t−1 words and the encoder vector is

    P(w^t_{i+1} | w^{<t}_{i+1}, h_i) ∝ exp(v_{w^t_{i+1}} · h^t_{i+1})

    where v_{w^t_{i+1}} denotes the row of V corresponding to the word w^t_{i+1}. An analogous computation is performed for the previous sentence s_{i−1}.

    Objective. Given a tuple (s_{i−1}, s_i, s_{i+1}), the objective optimized is the sum of the log-probabilities for the forward and backward sentences conditioned on the encoder representation:

    Σ_t log P(w^t_{i+1} | w^{<t}_{i+1}, h_i) + Σ_t log P(w^t_{i−1} | w^{<t}_{i−1}, h_i)
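
    As a concrete illustration, the decoder's word distribution above is just a softmax over the scores v_w · h for every vocabulary word. Below is a minimal NumPy sketch of that computation; the vocabulary size, hidden size, and random parameters are placeholders for illustration, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, hidden_size = 10_000, 256    # placeholder sizes

    V = rng.normal(scale=0.1, size=(vocab_size, hidden_size))  # vocabulary matrix, one row v_w per word
    h = rng.normal(size=hidden_size)                           # decoder hidden state h^t_{i+1}

    scores = V @ h                           # v_w · h for every word w
    probs = np.exp(scores - scores.max())    # subtract the max for numerical stability
    probs /= probs.sum()                     # softmax: P(w^t_{i+1} | w^{<t}_{i+1}, h_i)

    next_word_id = int(probs.argmax())       # greedy choice of the next word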

    Encoder-Decoder Architecture

    Based on: machinelearningmastery.com

    The Encoder-Decoder architecture is a way of organizing recurrent neural networks for sequence prediction problems that have a variable number of inputs, outputs, or both inputs and outputs.

    The architecture involves two components, an encoder and a decoder; a minimal Keras sketch follows the list below.

    • Encoder: The encoder reads the entire input sequence and encodes it into an internal representation, often a fixed-length vector called the context vector.
    • Decoder: The decoder reads the encoded input sequence from the encoder and generates the output sequence.
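
    The following sketch wires the two components together in Keras, the library already discussed above. The layer sizes, vocabulary size, and choice of GRU cells are illustrative assumptions, not the project's actual configuration.

    import numpy as np
    from tensorflow.keras.layers import Dense, Embedding, GRU, Input
    from tensorflow.keras.models import Model

    vocab_size, embed_dim, hidden_dim = 5000, 128, 256   # placeholder sizes

    # Encoder: read the whole input sequence and compress it into a context vector.
    enc_tokens = Input(shape=(None,), name="encoder_tokens")
    enc_embedded = Embedding(vocab_size, embed_dim)(enc_tokens)
    _, context = GRU(hidden_dim, return_state=True)(enc_embedded)  # final state = context vector

    # Decoder: generate the output sequence conditioned on the context vector.
    dec_tokens = Input(shape=(None,), name="decoder_tokens")
    dec_embedded = Embedding(vocab_size, embed_dim)(dec_tokens)
    dec_hidden = GRU(hidden_dim, return_sequences=True)(dec_embedded, initial_state=context)
    word_probs = Dense(vocab_size, activation="softmax")(dec_hidden)  # distribution over words at each step

    model = Model([enc_tokens, dec_tokens], word_probs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()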

    For more about the Encoder-Decoder architecture, see the post:

    • Encoder-Decoder Long Short-Term Memory Networks

    Both the encoder and the decoder submodels are trained jointly, meaning at the same time.

    This is quite a feat: traditionally, challenging natural language problems required developing separate models that were later strung together into a pipeline, allowing errors to accumulate during the sequence generation process.

    The entire encoded input is used as context for generating each step in the output. Although this works, the fixed-length encoding of the input limits the length of output sequences that can be generated.

    An extension of the Encoder-Decoder architecture is to provide a more expressive form of the encoded input sequence and allow the decoder to learn where to pay attention to the encoded input when generating each step of the output sequence.

    This extension of the architecture is called attention. The Encoder-Decoder architecture with attention is popular for a suite of natural language processing problems that generate variable-length output sequences, such as text summarization. The application of the architecture to text summarization is as follows (a toy attention computation is sketched after the list):

    • Encoder: The encoder is responsible for reading the source document and encoding it to an internal representation.
    • Decoder: The decoder is a language model responsible for generating each word in the output summary using the encoded representation of the source document.
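
    To make "learning where to pay attention" concrete, here is a toy NumPy sketch of dot-product attention over encoder states at a single decoding step; the sizes and random values are placeholders for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    src_len, hidden = 6, 4                            # placeholder sizes

    enc_states = rng.normal(size=(src_len, hidden))   # one vector per source position
    dec_state = rng.normal(size=hidden)               # decoder state at the current output step

    scores = enc_states @ dec_state                   # relevance of each source position right now
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # attention weights sum to 1

    context = weights @ enc_states                    # weighted mix of encoder states for this step
    print(weights.round(3), context.round(3))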

    N-Gram models

    N-gram models are probabilistic models that assign a probability to the "next" word in a sequence, given the n−1 previous words. This algorithm takes in an array of strings (the songs in our corpus) and uses punctuation to select beginning and end tokens for each sentence. Baking Lyrics uses a trigram model: it calculates the frequency with which every three-word combination appears in each band's corpus and extrapolates the probabilities from there.
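
    A minimal, self-contained sketch of that idea: count trigram frequencies, then sample the next word in proportion to them. The tokenization and the tiny corpus below are stand-ins for illustration, not the project's actual pipeline.

    import random
    from collections import Counter, defaultdict

    def train_trigrams(songs):
        """Count how often each word follows every two-word prefix."""
        counts = defaultdict(Counter)
        for song in songs:
            words = ["<s>", "<s>"] + song.split() + ["</s>"]
            for a, b, c in zip(words, words[1:], words[2:]):
                counts[(a, b)][c] += 1
        return counts

    def generate(counts, max_words=30):
        """Sample words proportionally to trigram frequency until an end token."""
        a, b, out = "<s>", "<s>", []
        for _ in range(max_words):
            options = counts.get((a, b))
            if not options:
                break
            words, freqs = zip(*options.items())
            nxt = random.choices(words, weights=freqs)[0]
            if nxt == "</s>":
                break
            out.append(nxt)
            a, b = b, nxt
        return " ".join(out)

    corpus = ["hello darkness my old friend", "hello darkness my old fear"]
    model = train_trigrams(corpus)
    print(generate(model))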

    FAQ

    There is always something that breaks

    TensorFlow

    If needed, different wheels (*.whl) for TF can be found here: TensorFlow. Use them to update the requirements file accordingly.

    Environment Variables

    If using flask run

    flask run

    1. PROD: PYTHONUNBUFFERED=1;FLASK_APP=baking.main:create_app(r'config/production.py');FLASK_ENV=production;FLASK_RUN_PORT=8003
    2. TEST: PYTHONUNBUFFERED=1;FLASK_APP=baking.main:create_app(r'config/testing.py');FLASK_ENV=testing;FLASK_RUN_PORT=8001
    3. DEV: PYTHONUNBUFFERED=1;FLASK_APP=baking.main:create_app(r'config/development.py');FLASK_RUN_PORT=8000;FLASK_DEBUG=0;FLASK_ENV=development. Feel free to enable debug mode in DEV

    If using manage run

    python manage.py run

    1. TEST: PYTHONUNBUFFERED=1;APP_CONFIG_FILE=config/testing.py
    2. DEV: PYTHONUNBUFFERED=1;APP_CONFIG_FILE=config/development.py. Feel free to enable debug mode in DEV

    To run Tests:

    python manage.py test

    • PYTHONUNBUFFERED=1;APP_CONFIG_FILE=config/testing.py
    Visit original content creator repository https://github.com/arukavina/baking-lyrics
  • xsens_mtw_driver-release

    ROS Xsens MTw Awinda Driver

    ROS driver for the Xsens (rebranded as Movella) MTw Awinda Kit. Xsens SDK 4.6.

    Please consider the xsens_mtw_driver_ros2 for ROS2.

    Usage

    • Connect the Awinda Station to your computer via USB and run: rosrun xsens_mtw_driver awinda_manager

    • Undock the MTW sensor and wait until the wireless connection is established:

    [ INFO] [1565393292.619168658]: Waiting for MTW to wirelessly connect...
    [ INFO] [1565393436.611962400]: EVENT: MTW Connected -> 00342322
    [ INFO] [1565393436.615162761]: Number of connected MTWs: 1. Press 'y' to start measurement or 'q' to end node.
    
    
    • Each MTw sensor will connect in turn. Remember the supported update rates, as described in the Xsens MTw User Manual (a quick data-reading sketch follows the table):

    Number of sensors    Update rate (Hz)
    1                    150
    2                    120
    4                    100
    6                    75
    12                   50
    18                   40
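
    Once measurement starts, a quick way to sanity-check the streamed data is a minimal Python subscriber like the sketch below. The topic name /mtw_00342322/imu and the sensor_msgs/Imu message type are assumptions for illustration only; check the driver's actual topics with rostopic list.

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import Imu  # assumed message type for illustration

    def on_imu(msg):
        # Log the orientation quaternion; msg also carries angular velocity and linear acceleration.
        q = msg.orientation
        rospy.loginfo("orientation: x=%.3f y=%.3f z=%.3f w=%.3f", q.x, q.y, q.z, q.w)

    if __name__ == "__main__":
        rospy.init_node("mtw_listener")
        # Hypothetical topic name; verify the real one with `rostopic list`.
        rospy.Subscriber("/mtw_00342322/imu", Imu, on_imu)
        rospy.spin()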

    Troubleshooting

    • Make sure you are in the correct group:

    $ ls -l /dev/ttyUSB0
    
    crw-rw---- 1 root dialout 188, 0 May  6 16:21 /dev/ttyUSB0
    
    $ groups
    
    "username" adm cdrom sudo dip plugdev lpadmin sambashare
    
    • Add yourself to the group:

    $ sudo usermod -G dialout -a $USER
    $ newgrp dialout
    

    Visit original content creator repository
    https://github.com/qleonardolp/xsens_mtw_driver-release

  • simple-sparkline-chart

    📈 Simple SparkLine Chart

    A lightweight, customizable, and easy-to-use SparkLine chart library with tooltip and flexible options, written in TypeScript.

    DEMO

    CodePen 👀

    Production 👀

    Library

    NPMJS 📦

    UNPKG </>

    GITHUB 🔮


    🚀 Features

    • 🔥 Lightweight – Small footprint and blazing fast rendering.
    • 🎨 Customizable – Control colors, sizes, tooltips, and more.
    • 🎯 TypeScript Support – Fully typed for better development experience.
    • 📦 Multiple Formats – Available as CommonJS, ESModule, and a global script for easy CDN usage.

    📦 Installation

    Using npm:

    npm install simple-sparkline-chart
    

    Using CDN:

    <script src="https://www.unpkg.com/simple-sparkline-chart"></script>

    You can then use it via the global SimpleSparkLineChart:

    <script>
      new SimpleSparkLineChart(".chart");
    </script>

    📚 Usage

    1️⃣ Basic Example

    <div
      class="sparkline"
      data-values="1,2,3,4,5,6,7"
      data-width="200"
      data-height="40"
    ></div>
    
    <script>
      new SimpleSparkLineChart(".sparkline");
    </script>

    This will create a basic SparkLine chart using the specified data-values.

    2️⃣ With Custom Options

    <div
      class="sparkline"
      data-values="0.5,1.5,2.3,3.8,2.9,3.4"
      data-width="300"
      data-height="60"
      data-color-stroke="#00f"
      data-filled="0.3"
      data-tooltip="top"
      data-aria-label="My SparkLine Chart"
    ></div>
    
    <script>
      new SimpleSparkLineChart(".sparkline");
    </script>

    🔧 Data Attributes

    • data-values (string, default: null): (Required) A comma-separated list of values or JSON data to plot.
    • data-width (number, default: 200): The width of the chart in pixels.
    • data-height (number, default: proportional to width): The height of the chart in pixels; automatically calculated from the width, maintaining a proportional aspect ratio.
    • data-color-stroke (string, default: #8956ff): The color of the chart line (stroke).
    • data-filled (number, default: none): Defines the opacity of the fill area under the line if set. If not provided, no fill is displayed.
    • data-tooltip (string, default: top): Tooltip position: "top" or "bottom". The tooltip is enabled if this attribute is set.
    • data-aria-label (string, default: "Simple SparkLine Chart"): Accessible label for the chart.
    • data-locale (string, default: user's locale): The locale used for formatting dates in tooltips (if using timestamp data).

    🧑‍💻 API

    You can initialize the chart with the SimpleSparkLineChart constructor, and it automatically processes all matching elements.

    Constructor

    new SimpleSparkLineChart(selector: string);

    • selector: A CSS selector string to target the elements where the chart will be rendered.

    🎨 Customization

    You can customize the following:

    1. Stroke and Fill: Set your own colors for the line and the area below it with data-color-stroke and data-filled.
    2. Dimensions: Control the width and height of the chart using data-width and data-height.
    3. Tooltips: Enable tooltips and set their position ("top" or "bottom") with data-tooltip.

    📊 Example of Object Data

    You can pass an array of objects with timestamps and values:

    <div
      class="sparkline"
      data-values='[
            {"timestamp":1693526400000,"value":0.93},
            {"timestamp":1693612800000,"value":0.9315}
        ]'
      data-tooltip="top"
    ></div>
    
    <script>
      new SimpleSparkLineChart(".sparkline");
    </script>

    🚀 Optimized for Performance

    • Minimized for production: The library is optimized to deliver minimal JS overhead.
    • Supports all modern browsers: Works in all major browsers including Chrome, Firefox, Safari, and Edge.

    🔥 CDN Usage

    For quick usage without installing npm dependencies:

    <script src="https://www.unpkg.com/simple-sparkline-chart"></script>

    🔧 Development

    To build the project locally:

    Install dependencies

    npm install
    

    Run the development server

    npm start
    

    Build the project

    npm run build
    

    Run tests

    npm run test
    

    📝 License

    This project is licensed under the MIT License – see the LICENSE file for details.

    💬 Feedback and Contributions

    Feel free to open an issue if you find a bug or have a feature request. Pull requests are welcome! 🙌

    Hope you enjoy using Simple SparkLine Chart! 🚀✨

    Visit original content creator repository https://github.com/dejurin/simple-sparkline-chart