Category: Blog

  • aws-dynamodb

    Serverless Components

    Click Here for Version 1.0

AWS DynamoDB Component: the easiest way to deploy & manage AWS DynamoDB tables, powered by Serverless Components.

    • Minimal Configuration – With built-in sane defaults.
    • Fast Deployments – Create & update tables in seconds.
    • Team Collaboration – Share your table outputs with your team’s components.
    • Easy Management – Easily manage and monitor your tables with the Serverless Dashboard.

    Check out the Serverless Fullstack Application for a ready-to-use boilerplate and overall great example of how to use this Component.

    1. Install
    2. Initialize
    3. Deploy
    4. Configure
    5. Develop
    6. Monitor
    7. Remove

     

    1. Install

To get started with this component, install the latest version of the Serverless Framework:

    $ npm install -g serverless
    

After installation, make sure you connect your AWS account by setting a provider on the org settings page in the Serverless Dashboard.

    2. Initialize

The easiest way to start using the aws-dynamodb component is by initializing the aws-dynamodb-starter template. Just run these commands:

    $ serverless init aws-dynamodb-starter
    $ cd aws-dynamodb-starter
    

    3. Deploy

    Once you have the directory set up, you’re now ready to deploy. Just run the following command from within the directory containing the serverless.yml file:

    $ serverless deploy
    

Your first deployment might take a little while, but subsequent deployments should take just a few seconds. For more information on what's going on during deployment, you can specify the --debug flag to view deployment logs in real time:

    $ serverless deploy --debug
    

    4. Configure

    The aws-dynamodb component requires minimal configuration with built-in sane defaults. Here’s a complete reference of the serverless.yml file for the aws-dynamodb component:

component: aws-dynamodb          # (required) name of the component. In this case, it's aws-dynamodb.
    name: my-table                   # (required) name of your instance.
    org: serverlessinc               # (optional) serverless dashboard org. default is the first org you created during signup.
    app: myApp                       # (optional) serverless dashboard app. default is the same as the name property.
    stage: dev                       # (optional) serverless dashboard stage. default is dev.
    
    inputs:
      name: my-table
      attributeDefinitions:
        - AttributeName: id
          AttributeType: S
        - AttributeName: attribute1
          AttributeType: N
      keySchema:
        - AttributeName: id
          KeyType: HASH
        - AttributeName: attribute1
          KeyType: RANGE
      localSecondaryIndexes:
        - IndexName: 'myLocalSecondaryIndex'
          KeySchema:
            - AttributeName: id
              KeyType: HASH
            - AttributeName: attribute2
              KeyType: RANGE
          Projection:
            ProjectionType: 'KEYS_ONLY'
      globalSecondaryIndexes:
        - IndexName: 'myGlobalSecondaryIndex'
          KeySchema:
            - AttributeName: attribute2
              KeyType: HASH
          Projection:
            ProjectionType: 'ALL'
      region: us-east-1

Once you've chosen your configuration, run serverless deploy again (or simply serverless) to deploy your changes. Keep in mind that localSecondaryIndexes cannot be updated after the first deployment; this is an AWS limitation. Also note that this component exclusively uses Pay-Per-Request pricing, which scales on demand like any serverless offering.

    5. Develop

Instead of having to run serverless deploy every time you make changes you want to test, you can enable dev mode, which lets the CLI watch for changes in your configuration file and deploy instantly on save.

    To enable dev mode, just run the following command:

    $ serverless dev
    

    6. Monitor

    Anytime you need to know more about your running aws-dynamodb instance, you can run the following command to view the most critical info.

    $ serverless info
    

This is especially helpful when you want to know the outputs of your instances so that you can reference them in another instance. It also shows you the status of your instance, when it was last deployed, and how many times it has been deployed. You will also see a URL where you can view more info about your instance on the Serverless Dashboard.
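
For example, another component instance in the same app can consume these outputs directly in its own serverless.yml. The snippet below is only a sketch: the ${output:...} interpolation syntax and the available output keys (name and arn here are assumptions) depend on your Serverless Framework version, so check the Components documentation before relying on it.

    # serverless.yml of another component instance in the same app (sketch)
    component: aws-lambda
    name: my-function
    app: myApp
    stage: dev

    inputs:
      env:
        # Assumed reference syntax: ${output:[app]:[stage]:[instance].[key]}
        TABLE_NAME: ${output:myApp:dev:my-table.name}
        TABLE_ARN: ${output:myApp:dev:my-table.arn}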

To dig even deeper, you can pass the --debug flag to view the state of your component instance in case the deployment failed for any reason.

    $ serverless info --debug
    

    7. Remove

If you want to tear down your entire aws-dynamodb infrastructure that was created during deployment, just run the following command in the directory containing the serverless.yml file.

    $ serverless remove
    

The aws-dynamodb component will then use all the data it needs from the built-in state storage system to delete only the relevant cloud resources that it created. Just like deployment, you can also specify a --debug flag for real-time logs from the component running in the cloud.

    $ serverless remove --debug
    


  • Chest-X-Ray-Medical-Diagnosis-with-Deep-Learning

    Chest-X-Ray-Medical-Diagnosis-with-Deep-Learning

    Diagnose 14 pathologies on Chest X-Ray using Deep Learning. Perform diagnostic interpretation using GradCAM Method

    Project Description

This project is a compilation of several sub-projects from the three-course Coursera AI for Medicine Specialization. The objective is to use a deep learning model to diagnose pathologies from chest X-rays.

    The project uses a pretrained DenseNet-121 model able to diagnose 14 labels such as Cardiomegaly, Mass, Pneumothorax or Edema. In other words, this single model can provide binary classification predictions for each of the 14 labeled pathologies.

    Weight normalization is performed to offset the low prevalence of the abnormalities among the dataset of X-Rays (class imbalance).
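
As an illustration of that idea (a minimal sketch, not the project's exact code), a frequency-weighted binary cross-entropy can be written with the Keras backend; pos_freq and neg_freq are assumed to be per-pathology frequency arrays computed from the training CSV:

    import numpy as np
    import keras.backend as K

    def weighted_bce(pos_freq, neg_freq, epsilon=1e-7):
        """Binary cross-entropy weighted by class frequency (one weight per pathology).

        Positive terms are scaled by neg_freq and negative terms by pos_freq, so the
        rare positive labels contribute as much to the loss as the abundant negatives.
        """
        pos_w = K.constant(np.asarray(neg_freq, dtype="float32"))
        neg_w = K.constant(np.asarray(pos_freq, dtype="float32"))

        def loss(y_true, y_pred):
            y_pred = K.clip(y_pred, epsilon, 1.0 - epsilon)
            per_label = -(pos_w * y_true * K.log(y_pred)
                          + neg_w * (1.0 - y_true) * K.log(1.0 - y_pred))
            return K.mean(per_label)

        return loss

    # e.g. model.compile(optimizer="adam", loss=weighted_bce(pos_freq, neg_freq))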

Finally, the GradCAM technique is used to highlight and visualize where the model is looking, i.e. which area of interest it uses to make the prediction. This is a tool that can be helpful for marker discovery, error analysis, training, and even deployment.
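
In essence, GradCAM takes the gradient of the predicted class score with respect to the last convolutional feature maps, averages it spatially to obtain per-channel weights, and forms a weighted sum of those maps. Below is a minimal sketch written for tf.keras with GradientTape; the original project runs on TensorFlow 1.15, where the same idea is usually expressed with K.gradients, and the conv_layer_name argument is an assumption you would adapt to your own model:

    import numpy as np
    import tensorflow as tf

    def grad_cam(model, image, class_index, conv_layer_name):
        """Return a heatmap showing where `model` looks when predicting `class_index`."""
        conv_layer = model.get_layer(conv_layer_name)
        grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

        with tf.GradientTape() as tape:
            conv_maps, preds = grad_model(image[np.newaxis, ...])
            class_score = preds[:, class_index]

        grads = tape.gradient(class_score, conv_maps)      # d(score) / d(feature maps)
        weights = tf.reduce_mean(grads, axis=(1, 2))       # spatial average -> per-channel weights
        cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_maps, axis=-1)
        cam = tf.nn.relu(cam)[0]                           # keep only positive influence
        cam = cam / (tf.reduce_max(cam) + 1e-8)            # normalise to [0, 1]
        return cam.numpy()                                 # upsample to 320 x 320 before overlaying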

    Dataset

The project uses chest X-ray images taken from the public ChestX-ray8 dataset. This dataset contains 108,948 frontal-view X-ray images of 32,717 unique patients. Each image in the dataset carries multiple text-mined labels identifying 14 different pathological conditions. These in turn can be used by physicians to diagnose 8 different diseases. For the project, we have been working with a subset of roughly 1,400 of the images:

    • 875 images to be used for training.
    • 109 images to be used for validation.
    • 420 images to be used for testing.

    The dataset includes a CSV file that provides the ground truth labels for each X-ray.

    DenseNet highlights

DenseNet was introduced in 2017 in an award-winning paper by Gao Huang et al. called Densely Connected Convolutional Networks. The model was able to outperform previous architectures like ResNet (which I covered in another project, Skin Cancer AI dermatologist).

Regardless of the architectural designs of these networks, they all try to create channels for information to flow between the initial layers and the final layers. DenseNet, with the same objective, creates paths between the layers of the network. Parts of this summary can be found in this review.

• DenseNet key novelty: DenseNet is a convolutional network where each layer is connected to all other layers that are deeper in the network
      • The first layer is connected to the 2nd, 3rd, 4th etc.
      • The second layer is connected to the 3rd, 4th, 5th etc.

Each layer in a dense block receives feature maps from all the preceding layers and passes its output to all subsequent layers. Feature maps received from other layers are fused through concatenation, not through summation (as in ResNets). Extracted feature maps are continuously concatenated with the previous ones, which avoids redundant and duplicate work.

This allows the network to re-use learned information and be more efficient. Such networks require fewer layers; state-of-the-art results are achieved with feature maps as small as 12 channels. This also means the network has fewer parameters to learn and is therefore easier to train. Amongst all variants, DenseNet-121 is the standard one.

    Key contributions of the DenseNet architecture:

• Alleviates the vanishing gradient problem (as networks get deeper, gradients aren't back-propagated sufficiently to the initial layers of the network; the gradients keep getting smaller as they move backwards through the network and, as a result, the initial layers lose their capacity to learn the basic low-level features)
    • Stronger feature propagation
    • Feature re-use
    • Reduced parameter count

    DenseNet architecture

DenseNet is composed of dense blocks. In those blocks, the layers are densely connected together: each layer receives as input the output feature maps of all previous layers. DenseNet-121 comprises 4 dense blocks, which themselves comprise 6 to 24 dense layers each.

• Dense block: A dense block comprises n dense layers. These dense layers are connected such that each dense layer receives feature maps from all preceding layers and passes its feature maps to all subsequent layers. The dimensions of the features (width, height) stay the same within a dense block.

• Dense layer: Each dense layer consists of 2 convolutional operations.
  • 1 × 1 CONV (bottleneck that brings the channel count down)
  • 3 × 3 CONV (the main convolution that extracts features)

Each CONV here corresponds to the sequence BatchNorm -> ReLU -> Conv. A layer repeats this sequence twice: first with a 1×1 convolution bottleneck producing growth rate × 4 feature maps, then with a 3×3 convolution. The authors found that the pre-activation mode (BN and ReLU before the Conv) was more efficient than the usual post-activation mode.

The growth rate (k = 32 for DenseNet-121) defines the number of output feature maps of a layer. Each layer outputs 32 feature maps, which are concatenated with the feature maps coming from the previous layers. So while the concatenated depth increases continuously, each layer only contributes 32 new feature maps (see the sketch below, after the transition-layer description).

• Transition layer: In between dense blocks, you find transition layers. Instead of summing the residual like ResNet, DenseNet concatenates all the feature maps. A transition layer is made of: Batch Normalization -> 1×1 Convolution -> Average pooling. Transition layers between two dense blocks perform the down-sampling (width and height halved) that is essential to CNNs. They also compress the feature maps and reduce the number of channels by half, which contributes to the compactness of the network.

Although concatenation generates a lot of input channels, DenseNet's convolutions generate a low number of feature maps (the authors recommend 32 for optimal performance, but world-class performance was achieved with only 12 output channels).
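
To make the dense layer and dense block concrete, here is a short Keras-style sketch (an illustration of the description above, not the project's code); growth_rate stands for k:

    from keras import layers

    def dense_layer(x, growth_rate=32):
        """BN -> ReLU -> 1x1 Conv (bottleneck, 4k maps), then BN -> ReLU -> 3x3 Conv (k maps)."""
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(4 * growth_rate, 1, use_bias=False)(y)               # bottleneck
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same", use_bias=False)(y)   # k new feature maps
        return layers.Concatenate()([x, y])   # concatenation: the input depth grows by k each layer

    def dense_block(x, num_layers, growth_rate=32):
        for _ in range(num_layers):
            x = dense_layer(x, growth_rate)
        return x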

    Key benefits:

    • Compactness. DenseNet-201 with 20M parameters yields similar validation error as a 101-layer ResNet with 45M parameters.
• The learned features are non-redundant, as they are all shared through the network's common knowledge.
    • Easier to train because the gradient is flowing back more easily thanks to the short connections.

    Model settings

In this project, the model takes 320 × 320 X-ray images and outputs predictions for each of the 14 pathologies, as illustrated below on a sample image.

    Environment and dependencies

In order to run the model, I used an environment with TensorFlow 1.15.0 and Keras 2.1.6. The model weights are provided in the repo.

    Results

I used a pre-trained model whose performance can be evaluated using the ROC curve shown at the bottom. The best results are achieved for Cardiomegaly (0.90 AUC), Edema (0.86), and Mass (0.82). Ideally we want to be significantly closer to 1. You can check out below the performance reported in the CheXNeXt paper for their model, as well as for radiologists, on this dataset.

Looking at unseen X-rays, the model correctly predicts the predominant pathology, generating a reasonably accurate diagnosis and highlighting the key region underlying its prediction. In addition to the main diagnosis (highest prediction), the model also predicts secondary issues, similar to what a radiologist would comment on as part of their analysis. These can be either false positives from noise captured in the X-rays or co-occurring pathologies.

    The model correctly predicts Cardiomegaly and absence of mass or edema. The probability for mass is higher, and we can see that it may be influenced by the shapes in the middle of the chest cavity, as well as around the shoulder.

    The model picks up the mass near the center of the chest cavity on the right. Edema has a high score for this image, though the ground truth doesn’t mention it.

    Here the model correctly picks up the signs of edema near the bottom of the chest cavity. We can also notice that Cardiomegaly has a high score for this image, though the ground truth doesn’t include it. This visualization might be helpful for error analysis; for example, we can notice that the model is indeed looking at the expected area to make the prediction.

Performance from the CheXNeXt paper (model as well as radiologists):

  • ulid-generator-rs

    ulid-generator-rs

    A Rust crate for generating ULIDs.


    Install to Cargo.toml

    Add this to your Cargo.toml:

    [dependencies]
    ulid-generator-rs = "<<version>>"

    About ULID

ULID stands for Universally Unique Lexicographically Sortable Identifier.

    For more information, please check the following specifications.

    Usage

    use ulid_generator_rs::{ULIDGenerator, ULID};
    
    let mut generator: ULIDGenerator = ULIDGenerator::new();
    let ulid: ULID = generator.generate().unwrap();
    let str: String = ulid.to_string();
    println!("{}", str); // "01ETGRM6448X1HM0PYWG2KT648"

    Alternative crates

    Benchmarks

    gen_ulid_and_to_string/j5ik2o/ulid-generator-rs/gen_to_str/0
    time:   [117.15 ns 117.26 ns 117.39 ns]
    change: [-1.7662% -0.9620% -0.3349%] (p = 0.00 < 0.05)
    Change within noise threshold.
    Found 3 outliers among 100 measurements (3.00%)
    2 (2.00%) high mild
    1 (1.00%) high severe
    
    gen_ulid_and_to_string/dylanhart/ulid-rs/gen_to_str/0
    time:   [115.63 ns 115.81 ns 116.04 ns]
    change: [-1.0856% -0.8741% -0.6850%] (p = 0.00 < 0.05)
    Change within noise threshold.
    Found 4 outliers among 100 measurements (4.00%)
    2 (2.00%) high mild
    2 (2.00%) high severe
    
    gen_ulid_and_to_string/huxi/rusty_ulid/gen_to_str/0
    time:   [126.32 ns 126.46 ns 126.60 ns]
    change: [-0.4696% -0.3016% -0.1476%] (p = 0.00 < 0.05)
    Change within noise threshold.
    Found 2 outliers among 100 measurements (2.00%)
    2 (2.00%) high mild
    
    gen_ulid_and_to_string/suyash/ulid-rs/gen_to_str/0
    time:   [157.22 ns 157.35 ns 157.49 ns]
    change: [-1.6453% -1.4630% -1.2639%] (p = 0.00 < 0.05)
    Performance has improved.
    Found 4 outliers among 100 measurements (4.00%)
    3 (3.00%) high mild
    1 (1.00%) high severe
    

    License

Licensed under either of

• Apache License, Version 2.0
• MIT License

at your option.

    Contribution

    Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

  • calcanim

    Calcanim

This is a repository where you will find all the code used to generate the animations in the Calcanim playlist on our YouTube channel, Animathica. The animations are made with Manim.

We invite you to download and modify our files! To generate your own videos after modifying a file, you will need a complete and stable installation of Manim. We recommend the following tutorials:

    Windows:

    Linux:

    macOS:

For our files to run properly, you need to install the latest version of Manim. In addition, in the tex_template.tex file inside the manimlib folder, you must change the babel package from english to spanish.

If you prefer, you can use this online application, which will let you generate your videos:
https://eulertour.com/gallery
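
As a quick illustration of what one of these files contains, here is a minimal Manim scene (a sketch that assumes the classic 3b1b manimlib layout used by this repository; the file and class names are hypothetical):

    from manimlib.imports import *

    class SquareToCircle(Scene):
        def construct(self):
            square = Square()
            circle = Circle()
            self.play(ShowCreation(square))       # draw the square
            self.play(Transform(square, circle))  # morph it into a circle
            self.wait()

With Manim installed, a scene like this is usually rendered with something along the lines of python -m manim example.py SquareToCircle -pl (preview, low quality); the exact invocation depends on your Manim version.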

Syllabus

Introduction to multivariable calculus:

Vector spaces:

Sequences:

Topology of R^n:

Limits and continuity of multivariable functions:

Differential calculus of curves:

Differential calculus of surfaces:

Extra:

Functions from R^n to R^m:

Differentiability theorems:

Volume integrals:

Line integrals:

• Definition
• Fundamental theorems
• Curl in R^2
• Green's theorem
• Curl in R^3

Surface integrals:

• Definition
• Stokes' theorem
• Divergence
• Gauss' theorem


  • lyricsFinder

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    yarn start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in the browser.

    The page will reload if you make edits.
    You will also see any lint errors in the console.

    yarn test

    Launches the test runner in the interactive watch mode.
    See the section about running tests for more information.

    yarn build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    yarn eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    About App

This app was mainly created to learn and appreciate the Context API of React.

• Libraries used in it:
1. axios
2. ant-design
3. react-router-dom

axios

• To fetch data from the backend in lifecycle hooks, i.e.

componentDidMount(){
  const promise = axios.get('url') // requires: import axios from 'axios';
}
• Making requests with axios

axios
  .get('url')
  .then(function(response) {
    //handle success
  })
  .catch(function(error) {
    //handle errors
  });

context (the Context API)

• React's context allows you to share information with any component, without any help from props.
    • Context provides a way to pass data through the component tree without having to pass props down manually at every level.

Create a file named context.jsx in the root path

    • context component:
    const Context = React.createContext();
• There are two exported components:
1. class Provider

Added in the root file App.js.
State is changed using a Redux-style dispatch property.

// requires: import React, { Component } from 'react';
export class Provider extends Component{
  state={
    data:[],
    // dispatch is called from other files with an action object ({ type, payload });
    // the reducer below then uses the payload to update the state.
    dispatch: action => this.setState(state => reducer(state, action))
  };
  componentDidMount(){
    // if you want to change state in this file, use 'setState' here
  }
  render(){
    return (
      <Context.Provider value={this.state}>
        {this.props.children}
      </Context.Provider>
    );
  }
}

    reducer component:

const reducer = (state, action) => {
  switch (action.type) {
    case 'object_in_type':
      return {
        ...state,
        data: action.payload // payload is the new data sent from the file where dispatch was called
      };
    default:
      return state;
  }
};
2. const Consumer

Added in any file where we want to use the state or values provided by the Provider.

    export const Consumer = Context.Consumer;

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    yarn build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify


  • HealthConnect

    HealthConnect

    Contribute:

    To contribute something to HealthConnect, please refer to our contributing document

    Features:

Open Source Medical IoT Application. Use any device – ESP32/ESP8266 dev board, Raspberry Pi, smartphone – connect the sensors and add the device to your account. Then view your medical sensor data sent to the cloud in REAL TIME.

    • Ability to access Patient Data remotely (Dashboard)
    • Digital Multi Para Monitor
    • Schedule appointments based on Doctor’s calendar
    • AI Symptom Checking ChatBot for quick queries
    • Order medicines according to Doctor’s consultancy
    • Use digital notes provided by nurse/doctor as instructions related to health.
    • Quick updated helpline numbers to access nearest Hospital/Ambulance

With this project, we are trying to analyze the problems people face while performing their tests, and to find a diagnostic solution for them once the lab test results are available.

Not all of these tests need to be taken in hospitals; an IoT device, a prototype of which we have built, can track and upload the data to the cloud. This data can be analyzed by a machine learning algorithm and cross-referenced to accurately find anomalies in the patient's body.

These could include infections, disorders, diseases, or any health condition that is unlikely in usual cases.

The focus is on having a portable ICU, with which medical help can reach people remotely.

    Get Started:

    1. Visit the SignUp Page and Create your Account.
    2. Now visit Login Page and login.
    3. View existing sample/dummy data on the portal.
    4. Explore the features on sidebar, and view sample vitals on the Dashboard and Diagnostics.
5. To view your own data or a real-time data sample, you'll have to add your device to the cloud.
    6. Click on Medical Devices on the sidebar, and follow the instructions to Add your Device.
7. View real-time health vitals of your body on the Dashboard and Diagnostics.

    HealthConnect Portal Interface (Patient):

    Dashboard View


    Digital Multi Para Monitor


    Medical Device Control Panel


    Diagnose Report with Prescription


    Calendar Appointments


    HealthCare Visit


    Symptom Check (AI Bot)


    HealthConnect Portal Interface (Admin):

    Dashboard View


  • flagsmith

    Feature Flag, Remote Config and A/B Testing platform, Flagsmith


Try our interactive demo

    Flagsmith is an Open-Source Feature Flagging Tool to Ship Faster & Control Releases

    Change the way your team releases software. Roll out, segment, and optimise—with granular control. Stay secure with on-premise and private cloud hosting.

    • Feature flags: Release features behind the safety of a feature flag
    • Make changes remotely: Easily toggle individual features on and off, and make changes without deploying new code
    • A/B testing: Use segments to run A/B and multivariate tests on new features
    • Segments: Release features to beta testers, collect feedback, and iterate
    • Organisation management: Stay organised with orgs, projects, and roles for team members
• SDKs & frameworks: Choose from 15+ popular languages like TypeScript, .NET, Java, and more. Integrate with any framework, including React, Next.js, and more
    • Integrations: Use your favourite tools with Flagsmith

    Flagsmith makes it easy to create and manage feature flags across web, mobile, and server side applications. Just wrap a section of code with a flag, and then use Flagsmith to toggle that feature on or off for different environments, users or user segments.
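
For instance, with a server-side SDK the wrapped section looks roughly like the sketch below (shown with the Python SDK; the flag name is hypothetical and method names may differ slightly between SDK versions):

    from flagsmith import Flagsmith

    flagsmith = Flagsmith(environment_key="<your-environment-key>")

    # Fetch the flags for this environment (per-identity flags work the same way)
    flags = flagsmith.get_environment_flags()

    if flags.is_feature_enabled("new_checkout"):
        print("Showing the new checkout flow")     # code path behind the flag
    else:
        print("Showing the legacy checkout flow")  # fallback while the flag is off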

    Get up and running in less than a minute:

    curl -o docker-compose.yml https://raw.githubusercontent.com/Flagsmith/flagsmith/main/docker-compose.yml
    docker-compose -f docker-compose.yml up

    The application will bootstrap an admin user, organisation, and project for you. You’ll find a link to set your password in your Compose logs:

    Superuser "admin@example.com" created successfully.
    Please go to the following page and choose a password: http://localhost:8000/password-reset/confirm/.../...

    Flagsmith Screenshot

    Flagsmith Open Source

    The Flagsmith repository is comprised of two core components – the REST API and the frontend dashboard.

    Further documentation for these can be found at:

    Flagsmith hosted SaaS

    You can try our hosted version for free at https://flagsmith.com

    Community Resources + Contribution Guidelines

    We love contributions from the community and are always looking to improve! Here are our contribution guidelines.

    Open Source Philosophy

    The majority of our platform is open source under the BSD-3-Clause license. A small number of repositories are under the MIT license.

    We built Flagsmith as the open source feature flag tool we needed but couldn’t find on GitHub. Our core functionality stays open, always. Read our open letter to developers.

    Open Source vs Paid

    With our core functionality being open, you can use our open-source feature flag and remote config management platform no matter what. Enterprise-level governance and management features are available with a valid Flagsmith Enterprise license.

    To learn more, contact us or see our version comparison.

    Contributors

    Thank you to the open source community for your contributions and for building this with us!

    Made with contrib.rocks.

  • CoreGPX

    Parse and generate GPX files easily on iOS, watchOS & macOS.

    What is CoreGPX?

    CoreGPX is a port of iOS-GPX-Framework to Swift language.

    CoreGPX currently supports all GPX tags listed in GPX v1.1 schema, along with the recent addition of GPX v1.0 support. It can generate and parse GPX compliant files on iOS, macOS and watchOS.

As it makes use of XMLParser for parsing GPX files, CoreGPX depends only on the Foundation API.

    Features

    • Successfully outputs string that can be packaged into a GPX file
    • Parses GPX files using native XMLParser
    • Support for iOS, macOS & watchOS
    • Supports Codable in essential classes
    • Enhanced full support for GPXExtensions for both parsing and creating.
    • Lossy GPX compression. Check out GPXCompressor for an implementation of this new feature.
    • (new) Legacy GPX support. (GPX 1.0 and below)

    Documentation

    CoreGPX is documented using jazzy.


    You can read the documentation here, which documents most of the important features that will be used for parsing and creating of GPX files.

    Installation

    CoreGPX supports CocoaPods, Carthage, as well as Swift Package Manager, such that you can install it, any way you want.

    To install using CocoaPods, simply add the following line to your Podfile:

    pod 'CoreGPX'

    CoreGPX works with Carthage as well, simply add the following line to your Cartfile:

    github "vincentneo/CoreGPX"

    How to use?

    Check out the wiki page for some basic walkthroughs of how to use this library.

Alternatively, you may check out the Example app by cloning the repo, running pod install, and running the Example project.

To know in depth how CoreGPX can be used in a true production setting, please refer to awesome projects like iOS-Open-GPX-Tracker or Avenue GPX Viewer, both of which use CoreGPX.

    Extras

    Check out the Extras folder for some extra helper codes that may help you with using CoreGPX. Simply drag and drop it into your project to use.

    • GPX+CLLocation.swift: Converting CLLocation type to GPXWaypoint, GPXTrackPoint and more.

    Contributing

    Contributions to this project will be more than welcomed. Feel free to add a pull request or open an issue. If you require a feature that has yet to be available, do open an issue, describing why and what the feature could bring and how it would help you!

    Like the project? Check out these too!

    License

    CoreGPX is available under the MIT license. See the LICENSE file for more info.

  • fastTravelCLI

         __           _  _____                     _   ___   __   _____ - -  -  -   -   -
        / _| ____ ___| |/__   \___  ______   _____| | / __\ / /   \_   \ - -  -  -   -   -
       | |_ / _  / __| __|/ /\/  _\/ _  \ \ / / _ \ |/ /   / /     / /\/  - -  -   -   -
       |  _| (_| \__ \ |_/ /  | | | (_| |\ V /  __/ / /___/ /___/\/ /_  - -  -  -   -   -
       |_|  \__._|___/\__\/   |_|  \__._| \_/ \___|_\____/\____/\____/ - -  -  -   -   -
    
    


    A better CLI navigation experience

fastTravelCLI is a fast, lightweight, and feature-rich cd command replacement.

    fastTravelCLI provides robust bookmarking, navigation history, useful fuzzy finders (powered by fzf), and more.

    fastTravelCLI is being continuously improved, check out the issues for new features, support, and integrations in the works.

    Installation

    Clone the repo, cd into it, and run the following based on your OS –

    bash install/linux.sh
    
    bash install/mac.sh
    

    Disclaimers

    Currently available for Unix-like OS and bash/zsh shells. May work in more shell environments but not guaranteed.
    Compiles using go version >= 1.20.0, may work with older versions but not guaranteed.

    Fuzzy finding features require tree and fzf.

    Usage

    # Go to a directory you would like to add a bookmark for and run
    ft -set [key]
    
    
    # You can also explicitly set a key to a directory, or set multiple at once
    ft -set key1=some/other/dir key2=./some/relative/path
    
    
    # Travel to a location by running
    ft [key]
    
    
    # fastTravelCLI evaluates keys to their absolute filepath, so you can do stuff like this
    ft [key]/some/subdir
    
    
    # ft can replace your cd command entirely and respects CDPATH
    ft relative/dir
    ft ..
    ft -
    ft mydir
    
    
    # To remove a bookmark run
    ft -rm [key]
    
    
    # To rename a bookmark run
    ft -rn [key] [new key]
    
    
    # ft allows you to visit previously visited directories in your current session
    ft [
    
    
    # Traverse back up your dir history using
    ft ]
    
    
    # fastTravelCLI has fzf integrations
    # The default behavior of ft is to pull up fzf with all your bookmarks
    ft
    
    # View immediate child directories in a given project (current project by default) in fzf
    ft -f
    ft -f mykey
    ft -f my/project/dir
    
    # View all child directories in a given project in fzf
    ft -fa
    ft -fa mykey
    ft -fa my/project/dir
    
    # You can also view your session history in fzf
    ft -hist
    
    
    # View all your bookmarks with
    ft -ls
    
    
    # fastTravelCLI accepts args piped to it and is highly scriptable
    echo "mykey=some/project/path" > myfile.txt
    cat myfile.txt | ft -set
    
    
    # if you change a directory's name on your machine, you can easily update fastTravelCLI
    ft -edit my/old/dirname newdirname
    
    
    # ft is easy to update to the latest release
    ft -update
    # or
    ft -u
    # you can also specify a specific version or latest
    ft -u v.0.2.92
    
    
    
    # To see a full list of available commands run
    ft -help
    # or
    ft -h
    
    # You can get more detailed help with a specific command
    ft -set -help
    # or
    ft -set -h

    Contributing

    PRs and feature suggestions are welcome. I originally made this for myself but if others find it useful and have feedback I’m open to it.

    Getting Started

    To set up the project locally for development, clone the repo and ensure you have the following installed –

    • Docker
    • go 1.22+
    • python 3.11+
    • lua 5.4+
    • GNU Make 4.4+

    Run all tests by running the default make command or make all.

    Documentation Site

    To run the docs site locally add a python virtual environment to the project.
    python3 -m venv venv

    Activate the virtual environment and install mkdocs and mkdocs-material.

    pip install mkdocs mkdocs-material

    You can then serve the site.

    make site
  • StrainNet

    StrainNet

    StrainNet is a deep learning based method for predicting strain from images

    Teaser image

    Table of Contents

    Getting Started

    Set-up

    Begin by cloning this repository:

    git clone https://github.com/reecehuff/StrainNet.git
    cd StrainNet
    

    Next, install the necessary Python packages with Anaconda.

    conda create -n StrainNet python=3.9
    conda activate StrainNet
    pip install -r requirements.txt
    

Finally, make sure that the Python path is correctly set. The command

    which python
    

should display the Python path of the Anaconda environment, e.g., /opt/anaconda3/envs/StrainNet/bin/python

    Downloading pre-trained models and data

    To download the data and pretrained models for this project, you can use the download.sh script. This script will download the data and models from a remote server and save them to your local machine.

    Warning: The data is approximately 15 GB in size and may take some time to download.

    To download the data and models, run the following command:

    . scripts/download.sh
    

This will download the data and models and save them to the current working directory. See the datasets folder for all of the ultrasound images (both synthetic and experimentally collected) and the models folder for the pre-trained StrainNet models.

    Demo: Applying StrainNet to a Synthetic Test Case

    To see a demo of StrainNet in action, you can apply the model to a synthetic test case. The synthetic test case is a simulated image with known strains that can be used to test the accuracy of the model.

    To apply StrainNet to the synthetic test case, use the following command:

    . scripts/demo.sh
    

You should now see a results folder with some plots of the performance on a synthetic test case where the largest strain is 4% (see the 04DEF case in StrainNet/datasets/SyntheticTestCases/04DEF).

    Generating a training set

    For a full tutorial, see generateTrainingSet/README.md.

    Training StrainNet

After generating a training set, StrainNet can be trained. To train StrainNet, you will need to run the train.py script. This script can be invoked from the command line, and there are several optional arguments that you can use to customize the training process.

    Here is an example command for training StrainNet with the default settings:

    python train.py
    

    You can also adjust the training settings by specifying command-line arguments. For example, to change the optimizer and learning rate, you can use the following command:

    python train.py --optimizer SGD --lr 0.01
    

    Arguments

    Below is a list of some of the available command-line arguments that you can use to customize the training process:

Argument | Default | Description
--optimizer | Adam | The optimizer to use for training.
--lr | 0.001 | The learning rate to use for the optimizer.
--batch_size | 8 | The batch size to use for training.
--epochs | 100 | The number of epochs to train for.
--train_all | False | Whether to train all of the models.

    For a complete list of available command-line arguments and their descriptions, you can use the --help flag:

    python train.py --help
    

    Or examine the core/arguments.py Python script.

    Resuming training

    You can also resume training on models for StrainNet by specifying the --resume flag and the path to the pre-trained model. For example:

    python train.py --resume "path/to/dir/containing/model.pt"
    

    Training all models

    By default, train.py will only train one of the four models needed for StrainNet. To train all the models needed for StrainNet, you can use the train.sh script. This script will invoke the necessary training scripts and pass the appropriate arguments to them.

    To run the train.sh script, simply execute the following command from the terminal:

    . scripts/train.sh
    

    Viewing the progress of your training with Tensorboard

By default, running train.py will write an events.out file that lets you visualize the progress of training StrainNet with Tensorboard. After running train.py, locate the events.out file in the newly created runs folder.

    Viewing the Tensorboard Webpage

    To view the Tensorboard webpage, you will need to start a Tensorboard server. You can do this by running the following command in the terminal:

    tensorboard --logdir="path/to/dir/containing/events.out"
    

    Replace "path/to/dir/containing/events.out" with a path to a folder containing events.out file(s) (e.g., runs). This will start a Tensorboard server and print a message with a URL that you can use to access the Tensorboard webpage.

    To view the Tensorboard webpage, open a web browser and navigate to the URL printed by the Tensorboard server. This will open the Tensorboard webpage, which allows you to view various training metrics and graphs.

    Viewing the Tensorboard File in VSCode

    To view the Tensorboard events.out file in Visual Studio Code, you may use the Tensorboard command.

    1. Open the command palette (View → Command Palette… or Cmd + Shift + P on macOS)
    2. Type “Python: Launch Tensorboard” in the command palette and press Enter.
    3. Select Select another folder and select the runs folder to view events.out file(s).

    Evaluating the performance of StrainNet

    After training the model, you can evaluate its performance on a test dataset to see how well it generalizes to unseen data. To evaluate the model, you will need to have a test dataset in a format that the model can process.

    To evaluate the model, you can use the eval.py script. This script loads the trained model and the test dataset, and runs the model on the test data to compute evaluation metrics such as accuracy and precision.

    To run the eval.py script, use the following command:

    python eval.py --model_dir "path/to/trained/models" --val_data_dir "path/to/validation/data"
    

Replace "path/to/trained/models" with the actual path to the trained models, and "path/to/validation/data" with the actual path to the validation data.

    Arguments

    You can see a list of all the available arguments for the eval.py script by using the --help flag:

    python eval.py --help
    

    Or examine the core/arguments.py Python script.

    Evaluating StrainNet on the synthetic test cases

    To apply the pretrained models to the synthetic test cases, you can use the eval.sh script. This script will invoke the necessary evaluation scripts and pass the appropriate arguments to them.

    To run the eval.sh script, simply execute the following command from the terminal:

    . scripts/eval.sh
    

    Citation

    @article{huff2024strainnet,
      title={Deep learning enables accurate soft tissue tendon deformation estimation in vivo via ultrasound imaging},
      author={Huff, Reece D and Houghton, Frederick and Earl, Conner C and Ghajar-Rahimi, Elnaz and Dogra, Ishan and Yu, Denny and Harris-Adamson, Carisa and Goergen, Craig J and O’Connell, Grace D},
      journal={Scientific Reports},
      volume={14},
      number={1},
      pages={18401},
      year={2024},
      publisher={Nature Publishing Group UK London}
    }
    

    LICENSE

    This project is licensed under the MIT License – see the LICENSE file for details.
