This is the backend client for Amazon Web Services.
This client connects to the following services:
Run npm run compodoc to generate Compodoc documentation to the /documentation directory.
Copyright ©2019 by Ralf Kostulski.
This code is released by Ralf Kostulski under the Affero GPL. See the LICENSE file for more information.
This is an extension of django-oauth-toolkit that solves the lack of support for JWT.
JWT support for:
Unsupported:
Add to your pip requirements:
git+https://github.com/Humanitec/django-oauth-toolkit-jwt#egg=django-oauth-toolkit-jwt
In order to generate an RS256 (RSA Signature with SHA-256) public/private key pair, execute the following:
$ ssh-keygen -t rsa -b 4096 -f jwtRS256.key # don't add a passphrase
$ openssl rsa -in jwtRS256.key -pubout -outform PEM -out jwtRS256.key.pub
$ cat jwtRS256.key
$ cat jwtRS256.key.pub
Have Docker installed as a first step.
docker-compose -f docker-compose-dev.yml build
To run all the tests:
docker-compose -f docker-compose-dev.yml run --entrypoint '/usr/bin/env' --rm dot_jwt tox
To run the tests only for Python 2.7:
docker-compose -f docker-compose-dev.yml run --entrypoint '/usr/bin/env' --rm dot_jwt tox -e py27
Or to run just one test:
docker-compose -f docker-compose-dev.yml run --entrypoint '/usr/bin/env' --rm dot_jwt tox -- -x tests/test_views.py::PasswordTokenViewTest::test_get_enriched_jwt
Documentation page: _index.md
The partner front-end service for BiFrost is Midgard (Angular) with Midgard Core (https://github.com/Humanitec/midgard). It is configured to connect to the BiFrost core automatically and facilitates connections to additional platform frontend and backend services.
First, build the images:
docker-compose build # --no-cache to force deps installation
To run the web server:
docker-compose up # -d for detached
User: user Password: xxxx.
All clients interact with our API using the OAuth2 protocol. In order to configure it, go to admin/oauth2_provider/application/ and add a new application there.
The search function is provided by the connected search service: https://github.com/humanitec/search_service
There are many other services and behaviours determined by the application's configuration. Review bifrost/settings/base.py and configure your environment variables so all services work without failures.
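As a sketch of the environment-driven configuration pattern described above (the variable names here are hypothetical examples, not BiFrost's actual settings; the real names are defined in bifrost/settings/base.py):

```python
import os

# Hypothetical variable names for illustration only
DEBUG = os.environ.get("BIFROST_DEBUG", "False").lower() == "true"
SEARCH_SERVICE_URL = os.environ.get("SEARCH_SERVICE_URL", "http://localhost:9000")

print(DEBUG, SEARCH_SERVICE_URL)
```

Defining defaults this way lets the services start locally without any variables set, while production values override them.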
To use JWT as the authentication method, we need to configure public and private RSA keys.
The following commands will generate a public and private key. The private key will stay in BiFrost and the public one will be supplied to microservices in order to verify the authenticity of the message:
$ openssl genrsa -out private.pem 2048
$ openssl rsa -in private.pem -outform PEM -pubout -out public.pem
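To sketch what a consuming microservice receives: a JWT is three unpadded base64url segments, and the header names the signing algorithm (RS256 here). This standalone example builds and inspects a dummy token; the signature is a placeholder, and real verification would check it against the public key with a JWT library.

```python
import base64
import json

def b64url_decode(segment):
    # JWT segments use unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def b64url_encode(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a dummy token: header and payload are real JSON, the signature is a
# placeholder (a real one is produced with the private key generated above).
header = b64url_encode(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
payload = b64url_encode(json.dumps({"sub": "user-1", "scope": "read"}).encode())
token = "%s.%s.%s" % (header, payload, "placeholder-signature")

decoded_header = json.loads(b64url_decode(token.split(".")[0]))
print(decoded_header["alg"])  # RS256
```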
If you're getting an error in your local environment, it can be related to the social-core library. To solve this issue you need to execute the following step:
The following templates were created to make creating tickets easier and to help developers.
Use the following template to create tickets for E-Mail:
From: [email_address]
To: [email_address]
Cc: [email_address]
Bcc: [email_address]
Reply-to: [email_address]
Subject: 'Title'
Body: 'Text message'(HTML)
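The template above maps directly onto a standard MIME message. A minimal sketch using Python's stdlib (all addresses are placeholders for the [email_address] fields):

```python
from email.message import EmailMessage

# Placeholder addresses standing in for the [email_address] fields above
msg = EmailMessage()
msg["From"] = "tickets@example.com"
msg["To"] = "support@example.com"
msg["Cc"] = "dev-team@example.com"
msg["Reply-To"] = "tickets@example.com"
msg["Subject"] = "Title"
msg.set_content("Text message")                             # plain-text body
msg.add_alternative("<p>Text message</p>", subtype="html")  # HTML alternative

print(msg["Subject"])
```

Note that Bcc recipients are usually passed to the SMTP client rather than set as a header, so they are omitted here.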
Chat is an open source and very basic live chat app written by a Python developer. It is free and MIT licensed. You can use it in a SaaS-like model on the website or you can also create your own self-hosted copy of the app.
Important: remember the Meteor settings file. The file must be strict JSON: use localhost:3000 as the hostName locally (or your-host-name.com in production), and the optional ga.account is your Google Analytics code. The contents should look like:
{
"public": {
"hostName": "localhost:3000",
"maxClientApps": 3,
"maxChatHistoryInDays": 3,
"ga": {
"account": "UA-********-*"
}
},
"private": {
"mainAppEmail": "your-email-address@gmail.com",
"mailGun": "smtp://{Default SMTP Login}:{Default Password}@{SMTP Hostname}:587",
"google": {
"clientId": "{your google API client id here}",
"secret": "{your google API secret key here}"
},
"facebook": {
"appId": "{your facebook API app id here}",
"secret": "{your facebook API secret key here}"
}
}
}
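Because Meteor loads this file as strict JSON, a quick sanity check catches stray comments or trailing commas before deployment. Here is one way to do it (Python used purely for illustration, with a minimal version of the settings shown above):

```python
import json

# A minimal, comment-free version of the settings shown above
settings_text = """
{
  "public": {
    "hostName": "localhost:3000",
    "maxClientApps": 3,
    "maxChatHistoryInDays": 3
  },
  "private": {
    "mainAppEmail": "your-email-address@gmail.com"
  }
}
"""

settings = json.loads(settings_text)  # raises ValueError if not strict JSON
print(settings["public"]["hostName"])
```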
Read more about it here: https://www.chatapp.support/docs#self-hosted-option
MIT
🍺 Buy me a beer!
Build the image:
docker-compose build
Run the web server:
docker-compose up
Open your browser with URL http://localhost:8080.
Run the tests:
docker-compose run --entrypoint '/usr/bin/env' --rm collection_service bash run-tests.sh
The Elvis image recognition integration is a bridge between Elvis DAM and Artificial Intelligence (AI) image recognition services from Google and Amazon. It uses these services to detect tags, landmarks and do facial analysis. The gathered information is stored as searchable metadata in Elvis. Tags can also be automatically translated to other languages. The integration supports two tagging modes: on demand tagging of images that already exist in Elvis and auto tagging of images immediately after they are imported.
This readme describes how to set up the integration. Please read this blog article if you want to know more about Elvis and AI.
The integration consists of several components. The main component is the image recognition server app. The integrated AI services are not identical in the functionality they provide; this is what the integration supports per AI provider:
Google Vision
AWS Rekognition
The linked document describes the high-level installation steps. Detailed configuration information is embedded in the various configuration files.
This API has no built-in authentication mechanism. There are, however, several ways to protect it:
As explained in the architecture overview, the image recognition server sends preview images to the configured AI vendors. These vendors all have their own privacy policies when it comes to data usage and storage. Some of them use your data to improve machine learning services and for analytics. For details, please consult the privacy policy of your AI vendor(s):
Build the image:
docker-compose build
Run the web server:
docker-compose up
Open your browser with URL http://localhost:8080. For the admin panel http://localhost:8080/admin (user: admin, password: admin).
Run the tests only once:
docker-compose run --rm --entrypoint 'bash scripts/run-tests.sh' extension_service
Run the tests and leave bash open inside the container, so it's possible to re-run the tests faster using bash scripts/run-tests.sh [--keepdb]:
docker-compose run --rm --entrypoint 'bash scripts/run-tests.sh --bash-on-finish' extension_service
To run bash:
docker-compose run --rm --entrypoint 'bash' extension_service
This is a blueprint client written in Angular.
This client connects to the following services:
Document the ways in which this client connects to the service. Methods used, data models used, endpoints used, etc.
Run npm run compodoc to generate Compodoc documentation to the /documentation directory.
Copyright ©2019 Humanitec GmbH.
This code is released under the Humanitec Affero GPL.
An authentication micro-service written in Java.
The Legal auth-service can be used to secure a single application instance or multiple instances with single sign-on (SSO).
A Legal auth-service enabled system has the following containers:
The instructions below make the following assumptions:
Add this repository as a git submodule, and put the following in your docker-compose.yml configuration:
auth-service:
build: auth-service
links:
- database
auth-web:
build: auth-service/web
links:
- auth-service
- server:INSTANCE_ID
You can also set configuration options as environment variables here. See auth-service.config for available options.
Example:
environment:
- log=true
- TWILIO_ACCOUNT=<account>
- TWILIO_TOKEN=<token>
- TWILIO_SOURCE=<twilio source phone number>
Users can be added by running the adduser command in the auth-service container:
docker exec -it app_auth_1 auth-service adduser "My Name" my_password my_email@example.com
Login passwords can be changed by calling /api/change-password:
POST /api/change-password
{"oldPassword":"myOldPassword","newPassword":"myNewPassword"}
Other changes can be done with SQL directly.
Authentication is done via the /api/login endpoint.
To log in without an OTP (either because it's not known yet or not required):
POST /api/login
{ "user": "my_email@example.com", "password": "my_password" }
On success, the call will return a token (see "session token" section) if no OTP is required. Otherwise it will return an error:
'{"error":"One time password required"}'
and the OTP will be sent out to the user.
To log in with an OTP:
POST /api/login
{ "user": "my_email@example.com", "password": "my_password", "otp": "myOTP" }
On success, a session token will be returned (see next section).
On successful login, a session token is returned in the following ways:
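The two request bodies described above differ only in whether the OTP is included. A sketch of building them (the helper name is ours, not part of the API):

```python
import json

def build_login_payload(user, password, otp=None):
    """Build the JSON body for POST /api/login; otp is sent only when known."""
    body = {"user": user, "password": password}
    if otp is not None:
        body["otp"] = otp
    return json.dumps(body)

print(build_login_payload("my_email@example.com", "my_password"))
print(build_login_payload("my_email@example.com", "my_password", otp="myOTP"))
```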
Copyright © 2015-2017 Cinderella. All rights reserved.
Twilio and Twiml are registered trademarks of Twilio and/or its affiliates. Other names may be trademarks of their respective owners.
The location service enables your application to store and group international addresses. It exposes the SiteProfile model with a flexible schema for location data and the ProfileType model to classify the SiteProfiles.
A SiteProfile is a representation of a location. It has the following properties:
A ProfileType helps group SiteProfiles together. It has the following properties:
Click here for the full API documentation.
You must have Docker installed.
Build the Docker image:
docker-compose build
Run a web server with this service:
docker-compose up
Now, open your browser and go to http://localhost:8080.
For the admin panel, go to http://localhost:8080/admin (user: admin, password: admin).
The local API documentation can be consulted at http://localhost:8080/docs.
To run the tests once:
docker-compose run --rm --entrypoint 'bash scripts/run-tests.sh' location_service
To run the tests and leave bash open inside the container, so that it's possible to re-run the tests faster using bash scripts/run-tests.sh [--keepdb]:
docker-compose run --rm --entrypoint 'bash scripts/run-tests.sh --bash-on-finish' location_service
To run bash:
docker-compose run --rm --entrypoint 'bash' location_service
If you would like to clean the database and start the application, do:
docker-compose up --renew-anon-volumes --force-recreate --build
Click here to go to the full API documentation.
Copyright ©2019 Humanitec GmbH.
This code is released under the Humanitec Affero GPL.
Your martial art assistant 2.0
The objective is to assist martial arts practitioners in the life-long pursuit of their chosen martial art.
The idea is pretty simple: give them an application that assists them in daily practice outside the dojo.
There are numerous things you will be able to do with the app:
At the moment, this app is more of an idea than a real application, and it will require a lot of time and effort to be fully developed, so any help is welcome. Feel free to contribute and contact us.
Neural Network is a deep learning framework that is intended to be used for research, development and production. We aim to have it running everywhere: desktop PCs, HPC clusters, embedded devices and production servers.
Installing Neural Network is easy:
pip install nnabla
This installs the CPU version of Neural Network. GPU-acceleration can be added by installing the CUDA extension with pip install nnabla-ext-cuda.
For more details, see the installation section of the documentation.
See Build Manuals.
For details on running on Docker, see the installation section of the documentation.
The Python API built on the Neural Network C++11 core gives you flexibility and productivity. For example, a two-layer neural network with classification loss can be defined in the following five lines of code (hyperparameters are enclosed in <>).
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
x = nn.Variable(<input_shape>)
t = nn.Variable(<target_shape>)
h = F.tanh(PF.affine(x, <hidden_size>, name='affine1'))
y = PF.affine(h, <target_size>, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))
Training can be done by:
import nnabla.solvers as S
# Create a solver (parameter updater)
solver = S.Adam(<solver_params>)
solver.set_parameters(nn.get_parameters())
# Training iteration
for n in range(<num_training_iterations>):
# Setting data from any data source
x.d = <set data>
t.d = <set label>
# Initialize gradients
solver.zero_grad()
# Forward and backward execution
loss.forward()
loss.backward()
# Update parameters by computed gradients
solver.update()
The dynamic computation graph enables flexible runtime network construction. Neural Network can use both paradigms of static and dynamic graphs, both using the same API.
x.d = <set data>
t.d = <set label>
import numpy as np

drop_depth = np.random.rand(<num_stochastic_layers>) < <layer_drop_ratio>
with nn.auto_forward():
h = F.relu(PF.convolution(x, <hidden_size>, (3, 3), pad=(1, 1), name='conv0'))
for i in range(<num_stochastic_layers>):
if drop_depth[i]:
continue # Stochastically drop a layer
h2 = F.relu(PF.convolution(h, <hidden_size>, (3, 3), pad=(1, 1),
name='conv%d' % (i + 1)))
h = F.add2(h, h2)
y = PF.affine(h, <target_size>, name='classification')
loss = F.mean(F.softmax_cross_entropy(y, t))
# Backward computation (can also be done in dynamically executed graph)
loss.backward()
Neural Network provides a command line utility, nnabla_cli, for easier use of NN.
nnabla_cli provides the following functionality.
For more details, see the documentation: https://nnabla.readthedocs.org
The technology is rapidly progressing, and researchers and developers often want to add their custom features to a deep learning framework. NNabla is really nice on this point: the architecture of Neural Network is clean and quite simple, and you can add new features very easily with the help of our code template generating system. See the following link for details.
Newsletter is a self-hosted newsletter application built on Node.js (v7+) and C++.
Depending on how you have configured your system and Docker you may need to prepend the commands below with sudo.
For more information, please read the docs.
This app officially supports IRC and Slack but the team plans to support Slack Enterprise Grid in the future.
Install the GitHub integration for Slack. After you've signed in to your Slack workspace, you will be prompted to give the app access.
After the app is installed, and once you've added the GitHub integration to the relevant channels using /invite @github, you will see previews of links to GitHub issues, pull requests, and code rendered as rich text in your workspace. Follow this link for a detailed guide.
You can customize your notifications by subscribing to activity that is relevant to your Slack channel, and unsubscribing from activity that is less helpful to your project.
Settings are configured with the /github slash command:
/github subscribe owner/repo [feature]
/github unsubscribe owner/repo [feature]
These are enabled by default, and can be disabled with the /github unsubscribe owner/repo [feature] command:
These are disabled by default, and can be enabled with the /github subscribe owner/repo [feature] command:
You can subscribe or unsubscribe from multiple settings at once. For example, to turn on activity for pull request reviews and comments:
/github subscribe owner/repo reviews comments
And to turn it back off:
/github unsubscribe owner/repo reviews comments
When you install the new GitHub integration for Slack in your Slack workspace, you'll be prompted to move over all of your existing subscriptions - so getting set up again is easy. As you enable individual subscriptions in the new app, your settings will be automatically migrated and subscriptions in the legacy app will be disabled.
Please fill out GitHub's Support form and your request will be routed to the right team at GitHub.
Want to help improve the integration between GitHub and Slack? Check out the contributing docs to get involved.
The project is available as open source under the terms of the MIT License.
When using the GitHub logos, be sure to follow the GitHub logo guidelines.
Control your Smart Bluetooth Bulb through the Web!
Read about this project in my blog post:
This project uses Web Bluetooth which is currently only available on Chrome for Android 6+, Mac OS X, Chrome OS (and also Linux, but behind a flag).
➡ Online Demo
➡ Improved version by Geraldo
A JavaScript module for translating from English to pirate speak, built for teaching.
$ npm install --save pirate-speak
var pirateSpeak = require('pirate-speak');
var english = 'Cash rules everything around me C.R.E.A.M. get the money';
var pirate = pirateSpeak.translate(english);
// -> Coin rules everything around me C.R.E.A.M. get thar doubloons
var pirateSpeak = require('pirate-speak');
var english = 'Mama always said life was like a box of chocolates. You never know what you\'re gonna get.';
var pirate = pirateSpeak.translate(english);
// -> Mama always said life be like a barrel o' chocolates. Ye nary know what you're gonna get.
As he came into thar window
It be thar sound o' a crescendo
He came into her apartment
He port thar bloodstains on thar carpet
She ran underneath thar table
He could see she be unable
So she ran into thar bedroom
She be struck down, it be her doom
Annie, be ye ok?
So, Annie be ye ok
Be ye ok, Annie
Annie, be ye ok?
So, Annie be ye ok
Be ye ok, Annie
--- Captain Jackson
If me be yer boyfriend, I'd nary let ye sail
me can take ye places ye ain't nary been afore
Baby, take a chance or you'll nary ever know
me got doubloons in me hands that I'd verily like t' blow
Swag, swag, swag, on ye
Chillin' by thar fire while our jolly crew eatin' fondue
me don't know about me but me know about ye
So cry ahoy t' falsetto in three, two, swag
--- First Mate Kostulski
Part of the teaching exercise involved building a backend pirate API and then interacting with it from a frontend. A live version of the heroku app is available here.
You can check out the full license here
This project is licensed under the terms of the MIT license.
Service for handling authentication, data mesh, and module communication.
Build the image:
$ docker-compose -f docker-compose-dev.yml build
Run the web server:
$ docker-compose -f docker-compose-dev.yml up
Open your browser with URL http://localhost:8080. For the admin panel http://localhost:8080/admin (user: admin, password: admin).
This is an implementation of an SMS Messages app that you can use with a Twilio number.
This is a work in progress as there are plenty of features to add. Currently the application does the following:
Clone the repository and install dependencies:
$ git clone https://github.com/philnash/sms-app.git
$ cd sms-app
$ npm install
Copy .env.example to .env and fill in your Twilio account credentials and the Twilio number you want to use. You can find your Account SID and Auth Token in your Twilio account portal.
$ cp .env.example .env
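The .env file is plain KEY=value lines; in Node it is typically loaded with the dotenv package, but the format itself is simple enough to sketch (Python used here for illustration; the variable names are examples, so use the keys from .env.example):

```python
# Parse simple KEY=value lines as found in a .env file.
# The variable names below are illustrative examples.
env_text = """TWILIO_ACCOUNT_SID=ACxxxxxxxx
TWILIO_AUTH_TOKEN=your_auth_token
# comment lines and blank lines are skipped
TWILIO_PHONE_NUMBER=+15551234567"""

env = {}
for line in env_text.splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        key, _, value = line.partition("=")
        env[key] = value

print(env["TWILIO_PHONE_NUMBER"])
```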
Then start the application:
$ npm start
Navigate to http://localhost:3000 and you will see the application.
This simple shopping cart prototype shows how React components and Redux can be used to build a friendly user experience with instant visual updates and scalable code in ecommerce applications.
Try playing with the code on Sandbox :)
/* First, Install the needed packages */
npm install
/* Then start both Node and React */
npm start
/* To run the tests */
npm run test
/* Running e2e tests */
npm run wdio
The MIT License (MIT). Please see License File for more information.
The Splunk app enables data analysts and IT administrators to import the data they need to make their organization more productive, and finally makes Office 365 data available to Splunk.
Note: Tested with Splunk Enterprise 6.0, 6.1 and 6.2
The project file has build events that push the project files to a %SPLUNK_HOME% location, which you will need to define in your system environment variables.
Task List is a simple web application for saving your daily tasks so that you don't miss anything.
https://januszch.github.io/Task-List/
You can transform your face using AI with Unhappy Face Remover (UFR).
UFR is an Android and iOS application.
This module is an unofficial wrapper to the AI system.
When I wrote this module I looked at https://github.com/t3hk0d3/ruby_ufr (thanks!).
```git clone git@github.com:veetaw/UFR-py.git ufr```
```cd ufr && python setup.py install```
See example/test.py for a usage example.
This is an example of Video On Demand (VOD) playback.
Be sure to have previously recorded a broadcast using the Publish Record Example.
The playback format - either Flash or HLS - will be determined based on the extension with the following rules:
| Extension | Format     |
| --------- | ---------- |
| flv       | Flash/RTMP |
| mp4       | Flash/RTMP |
| m3u8      | HLS        |
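The extension-to-format rule above can be sketched as a simple lookup (illustrative code, not part of the Red5 Pro SDK):

```python
# Map a stream file extension to its playback format per the rules above
FORMAT_BY_EXTENSION = {
    "flv": "Flash/RTMP",
    "mp4": "Flash/RTMP",
    "m3u8": "HLS",
}

def playback_format(stream_name):
    ext = stream_name.rsplit(".", 1)[-1].lower()
    return FORMAT_BY_EXTENSION.get(ext, "unknown")

print(playback_format("thefiletoplay.flv"))   # Flash/RTMP
print(playback_format("thefiletoplay.m3u8"))  # HLS
```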
Playing back a VOD file using the Red5 Pro Subscriber is similar to streaming a live video. Some configuration attributes will be different depending on the playback target.
To play back a VOD in the RTMP-based Subscriber:
With a configuration provided for the RTMP Subscriber:
{
protocol: 'rtmp',
host: 'localhost',
port: 1935,
app: 'live',
streamName: 'thefiletoplay.flv'
}
The Playback engine will connect to the server at rtmp://localhost:1935/ and attempt to play back the thefiletoplay.flv file located in <red5proserver>/webapps/live/streams.
To play back a VOD in the HLS-based Subscriber:
With a configuration provided for the HLS Subscriber:
{
protocol: 'http',
host: 'localhost',
port: 5080,
app: 'live',
streamName: 'thefiletoplay'
}
The Playback engine will connect to the server at http://localhost:5080/ and attempt to play back the thefiletoplay.m3u8 file located in <red5proserver>/webapps/live/streams.
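Combining the two configurations, the server URL the playback engine connects to can be derived like this (a sketch for illustration; the real SDK assembles the URL internally):

```python
def server_url(config):
    # rtmp://host:port/ or http://host:port/, as in the two examples above
    return "%s://%s:%d/" % (config["protocol"], config["host"], config["port"])

rtmp_config = {"protocol": "rtmp", "host": "localhost", "port": 1935,
               "app": "live", "streamName": "thefiletoplay.flv"}
hls_config = {"protocol": "http", "host": "localhost", "port": 5080,
              "app": "live", "streamName": "thefiletoplay"}

print(server_url(rtmp_config))  # rtmp://localhost:1935/
print(server_url(hls_config))   # http://localhost:5080/
```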