Error 127 means one of two things: (1) the file could not be found: it is not on your $PATH, or, as in this case, the relative path is incorrect -- remember that the current working directory for a random terminal might not be the same as for the IDE you're using; it might be better to just use an absolute path instead. Or (2) the file is not executable in your native format: run file -L on /bin/sh (to get your default/native format) and on the compiler itself (to see what format it is). If the problem is (2), then you can solve it in a few different ways.
The original conio.h was implemented by Borland, so it's not part of the C standard library, nor is it defined by POSIX.
But here is an implementation for Linux that uses ncurses to do the job.
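A minimal sketch of the idea, assuming ncurses is installed (my_getch is a hypothetical name; link with -lncurses):
#include <stdio.h>
#include <ncurses.h>
/* Rough conio-style getch(): read one keypress, no Enter, no echo. */
static int my_getch(void)
{
    int ch;
    initscr();    /* enter curses mode */
    cbreak();     /* deliver keys immediately, no line buffering */
    noecho();     /* don't echo the typed character */
    ch = getch(); /* ncurses' own getch() */
    endwin();     /* restore the terminal */
    return ch;
}
int main(void)
{
    int c = my_getch();
    printf("you pressed '%c'\n", c);
    return 0;
}
Compile with gcc kb.c -lncurses.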
brew install imagemagick
Don't forget to also install gs (Ghostscript), which is a dependency if you want to convert PDFs to images, for example:
brew install ghostscript
If you are on a 64-bit build of Ubuntu or Debian (check with e.g. cat /proc/version), you should simply use the 64-bit cross compilers. If you cloned
git clone https://github.com/raspberrypi/tools
then the 64bit tools are in
tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64
use that directory for the gcc toolchain. A useful tutorial for compiling that I followed is Building and compiling Raspberry PI Kernel (use the -x64 path from above as ${CCPREFIX}).
To generate a shared library you need first to compile your C code with the -fPIC
(position independent code) flag.
gcc -c -fPIC hello.c -o hello.o
This will generate an object file (.o); now you take it and create the .so file:
gcc hello.o -shared -o libhello.so
EDIT: Suggestions from the comments:
You can use
gcc -shared -o libhello.so -fPIC hello.c
to do it in one step. – Jonathan Leffler
I also suggest adding -Wall (to get all warnings) and -g (to get debugging information) to your gcc commands. – Basile Starynkevitch
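To round this out, here is a minimal sketch of actually using the library (main.c and the hello() function are assumptions about what hello.c provides):
/* main.c -- assumes hello.c defines a function named hello() */
void hello(void);
int main(void)
{
    hello();
    return 0;
}
gcc main.c -L. -lhello -o main
LD_LIBRARY_PATH=. ./main
The -L. tells the linker where to find libhello.so at link time; LD_LIBRARY_PATH=. lets the dynamic loader find it at run time.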
For the benefit of anyone reading this later, you need to link against it as Fred said:
gcc fib.c -lm -o fibo
One good way to find out what library you need to link is by checking the man page if one exists. For example, man pow
and man floor
will both tell you:
Link with -lm.
An explanation of linking the math library in C programming: Linking in C
Another alternative would be to specify the list of libraries twice:
gcc prog.o libA.a libB.a libA.a libB.a -o prog.x
Doing this, you don't have to bother with the right sequence since the reference will be resolved in the second block.
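If you are using GNU ld, a group achieves the same effect; the linker re-scans the archives in the group until no new undefined references can be resolved:
gcc prog.o -Wl,--start-group libA.a libB.a -Wl,--end-group -o prog.x
The ld manual notes that group re-scanning has a performance cost, which is why it is only recommended for unavoidable circular references.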
Build system independent method
make SHELL='sh -x'
is another option. Sample Makefile:
a:
@echo a
Output:
+ echo a
a
This sets the special SHELL
variable for make
, and -x
tells sh
to print the expanded line before executing it.
One advantage over -n is that it actually runs the commands. I have found that for some projects (e.g. the Linux kernel) -n may stop running much earlier than usual, probably because of dependency problems.
One downside of this method is that you have to ensure that the shell used will be sh, which is Make's POSIX default, but which could be changed with the SHELL make variable.
Doing sh -v would be cool as well, but Dash 0.5.7 (Ubuntu 14.04 sh) ignores it for -c commands (which seems to be how make uses it), so it doesn't do anything.
make -p
will also interest you, which prints the values of set variables.
CMake generated Makefiles always support VERBOSE=1
As in:
mkdir build
cd build
cmake ..
make VERBOSE=1
Dedicated question at: Using CMake with GNU Make: How can I see the exact commands?
It is possible, of course: use -l:
instead of -l
. For example -l:libXYZ.a
to link with libXYZ.a
. Notice the lib
written out, as opposed to -lXYZ
which would auto expand to libXYZ
.
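This is a GNU ld feature. For example (assuming libXYZ.a is in one of the linker's search directories):
gcc main.o -l:libXYZ.a -o prog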
If using getline is an option (not neglecting its security issues, and if you are comfortable with pointers), you can avoid the string functions, as getline returns the number of characters read. Something like below:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    char *fname, *lname;
    size_t size = 32; // Initial size of the buffers
    ssize_t nchar;    // Number of characters read; getline returns ssize_t
    fname = malloc(size * sizeof *fname);
    lname = malloc(size * sizeof *lname);
    if (NULL == fname || NULL == lname)
    {
        printf("Error in memory allocation.");
        exit(1);
    }
    printf("Enter first name ");
    nchar = getline(&fname, &size, stdin);
    if (nchar == -1) // getline returns -1 on failure to read a line.
    {
        printf("Line couldn't be read..");
        // This if block could be repeated for the next getline too
        exit(1);
    }
    printf("Number of characters read :%zd\n", nchar);
    fname[nchar - 1] = '\0'; // Overwrite the trailing newline
    printf("Enter last name ");
    nchar = getline(&lname, &size, stdin);
    printf("Number of characters read :%zd\n", nchar);
    lname[nchar - 1] = '\0';
    printf("Name entered %s %s\n", fname, lname);
    return 0;
}
Note: the security issues with getline shouldn't be neglected, though.
export CFLAGS=-m32
A few words on "what is makeinfo" -- other answers cover "how do I get it" well.
The section "Creating an Info File" of the Texinfo manual states that
makeinfo
is a program that converts a Texinfo file into an Info file, HTML file, or plain text.
The Texinfo home page explains that Texinfo itself "is the official documentation format of the GNU project" and that it "uses a single source file to produce output in a number of formats, both online and printed (dvi, html, info, pdf, xml, etc.)".
To sum up: Texinfo is a documentation source file format and makeinfo
is the program that turns source files in Texinfo format into the desired output.
After a fair amount of work, I was able to get it to build on Ubuntu 12.04 x86 and Debian 7.4 x86_64. I wrote up a guide below. Can you please try following it to see if it resolves the issue?
If not please let me know where you get stuck.
Install Common Dependencies
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev
Install NumArray 1.5.2
wget http://goo.gl/6gL0q3 -O numarray-1.5.2.tgz
tar xfvz numarray-1.5.2.tgz
cd numarray-1.5.2
sudo python setup.py install
Install Numeric 23.8
wget http://goo.gl/PxaHFW -O numeric-23.8.tgz
tar xfvz numeric-23.8.tgz
cd Numeric-23.8
sudo python setup.py install
Install HDF5 1.6.5
wget ftp://ftp.hdfgroup.org/HDF5/releases/hdf5-1.6/hdf5-1.6.5.tar.gz
tar xfvz hdf5-1.6.5.tar.gz
cd hdf5-1.6.5
./configure --prefix=/usr/local
sudo make
sudo make install
Install Nanoengineer
git clone https://github.com/kanzure/nanoengineer.git
cd nanoengineer
./bootstrap
./configure
make
sudo make install
Troubleshooting
On Debian Jessie, you will receive the error message that cant pants mentioned. There seems to be an issue in the automake scripts: x86_64-linux-gnu-gcc is inserted into CFLAGS, and gcc will interpret that as the name of one of the source files. As a workaround, let's create an empty file with that name -- empty so that it won't change the program, and with that exact name so that the compiler picks it up. From the cloned nanoengineer directory, run this command to make gcc happy (yes, it is a hack, but it does work)...
touch sim/src/x86_64-linux-gnu-gcc
If you receive an error message when attempting to compile HDF5 along the lines of: "error: call to ‘__open_missing_mode’ declared with attribute error: open with O_CREAT in second argument needs 3 arguments", then modify the file perform/zip_perf.c, line 548, to look like the following and then rerun make...
output = open(filename, O_RDWR | O_CREAT, S_IRUSR|S_IWUSR);
If you receive an error message about Numeric/arrayobject.h not being found when building Nanoengineer, try running
export CPPFLAGS=-I/usr/local/include/python2.7
./configure
make
sudo make install
If you receive an error message similar to "TRACE_PREFIX undeclared", modify the file sim/src/simhelp.c lines 38 to 41 to look like this and re-run make:
#ifdef DISTUTILS
static char tracePrefix[] = "";
#else
static char tracePrefix[] = "";
If you receive an error message when trying to launch NanoEngineer-1 that mentions something similar to "cannot import name GL_ARRAY_BUFFER_ARB", modify the lines in the following files
/usr/local/bin/NanoEngineer1_0.9.2.app/program/graphics/drawing/setup_draw.py
/usr/local/bin/NanoEngineer1_0.9.2.app/program/graphics/drawing/GLPrimitiveBuffer.py
/usr/local/bin/NanoEngineer1_0.9.2.app/program/prototype/test_drawing.py
that look like this:
from OpenGL.GL import GL_ARRAY_BUFFER_ARB
from OpenGL.GL import GL_ELEMENT_ARRAY_BUFFER_ARB
to look like this:
from OpenGL.GL.ARB.vertex_buffer_object import GL_ARRAY_BUFFER_ARB
from OpenGL.GL.ARB.vertex_buffer_object import GL_ELEMENT_ARRAY_BUFFER_ARB
I also found an additional troubleshooting text file that has been removed, but you can find it here
Neither <iostream>
nor <iostream.h>
is a standard C header file. Your code is meant to be C++, where <iostream>
is a valid header. Use g++
(and a .cpp
file extension) for C++ code.
Alternatively, this program uses mostly constructs that are available in C anyway. It's easy enough to convert the entire program to compile using a C compiler. Simply remove #include <iostream>
and using namespace std;
, and replace cout << endl;
with putchar('\n');
... I advise compiling using C99 (e.g. gcc -std=c99).
It seems the problem is that / is not a floor operation: C's integer division truncates toward zero, which differs from floor for negative operands. Going through floating point and floor gives a true mathematical modulo:
#include <math.h> // for floor(); link with -lm

int mod(int m, float n)
{
return m - floor(m/n)*n;
}
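As a quick self-contained check of the difference (the printed values assume C's truncating integer division):
#include <math.h>
#include <stdio.h>
int mod(int m, float n)
{
    return m - floor(m/n)*n;
}
int main(void)
{
    printf("%d\n", -7 % 3);     /* prints -1: % truncates toward zero */
    printf("%d\n", mod(-7, 3)); /* prints 2: floor-based modulo */
    return 0;
}
Compile with gcc mod.c -lm.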
GCC 4.9 introduces a newer C++ ABI version than your system libstdc++ has, so you need to tell the loader to use this newer version of the library by adding that path to LD_LIBRARY_PATH
. Unfortunately, I cannot tell you straight off where the libstdc++ so for your GCC 4.9 installation is located, as this depends on how you configured GCC. So you need something in the style of:
export LD_LIBRARY_PATH=/home/user/lib/gcc-4.9.0/lib:/home/user/lib/boost_1_55_0/stage/lib:$LD_LIBRARY_PATH
Note the actual path may be different (there might be some subdirectory hidden under there, like x86_64-unknown-linux-gnu/4.9.0 or similar).
If you are not comfortable with bash, you can continue to work in a standard windows command (i.e. DOS) shell.
For this to work you must add C:\cygwin\bin (or your local alternative) to the Windows PATH variable.
With this done, you may:
1) Open a command (DOS) shell
2) Change the directory to the location of your code (c:, then cd path\to\file)
3) Run gcc myProgram.c -o myProgram
As mentioned in nik's response, the "Using Cygwin" documentation is a great place to learn more.
How do you disable the unused variable warnings coming out of gcc?
I'm getting errors out of boost on windows and I do not want to touch the boost code...
You visit Boost's Trac and file a bug report against Boost.
Your application is not responsible for clearing library warnings and errors. The library is responsible for clearing its own warnings and errors.
You need to use objcopy to separate the debug information:
objcopy --only-keep-debug "${tostripfile}" "${debugdir}/${debugfile}"
strip --strip-debug --strip-unneeded "${tostripfile}"
objcopy --add-gnu-debuglink="${debugdir}/${debugfile}" "${tostripfile}"
I use the bash script below to separate the debug information into files with a .debug extension in a .debug directory. This way I can tar the libraries and executables in one tar file and the .debug directories in another. If I want to add the debug info later on I simply extract the debug tar file and voila I have symbolic debug information.
This is the bash script:
#!/bin/bash
scriptdir=`dirname ${0}`
scriptdir=`(cd ${scriptdir}; pwd)`
scriptname=`basename ${0}`
set -e
function errorexit()
{
errorcode=${1}
shift
echo $@
exit ${errorcode}
}
function usage()
{
echo "USAGE ${scriptname} <tostrip>"
}
tostripdir=`dirname "$1"`
tostripfile=`basename "$1"`
if [ -z ${tostripfile} ] ; then
usage
errorexit 0 "tostrip must be specified"
fi
cd "${tostripdir}"
debugdir=.debug
debugfile="${tostripfile}.debug"
if [ ! -d "${debugdir}" ] ; then
echo "creating dir ${tostripdir}/${debugdir}"
mkdir -p "${debugdir}"
fi
echo "stripping ${tostripfile}, putting debug info into ${debugfile}"
objcopy --only-keep-debug "${tostripfile}" "${debugdir}/${debugfile}"
strip --strip-debug --strip-unneeded "${tostripfile}"
objcopy --add-gnu-debuglink="${debugdir}/${debugfile}" "${tostripfile}"
chmod -x "${debugdir}/${debugfile}"
(The following is a very artificial example cooked up to illustrate.) One major use of packed structs is where you have a stream of data (say 256 bytes) to which you wish to supply meaning. If I take a smaller example, suppose I have a program running on my Arduino which sends via serial a packet of 16 bytes which have the following meaning:
0: message type (1 byte)
1: target address, MSB
2: target address, LSB
3: data (chars)
...
F: checksum (1 byte)
Then I can declare something like
typedef struct {
uint8_t msgType;
uint16_t targetAddr; // may have to bswap
uint8_t data[12];
uint8_t checksum;
} __attribute__((packed)) myStruct;
and then I can refer to the targetAddr bytes via aStruct.targetAddr rather than fiddling with pointer arithmetic.
Now with alignment stuff happening, taking a void* pointer in memory to the received data and casting it to a myStruct* will not work unless the compiler treats the struct as packed (that is, it stores data in the order specified and uses exactly 16 bytes for this example). There are performance penalties for unaligned reads, so using packed structs for data your program is actively working with is not necessarily a good idea. But when your program is supplied with a list of bytes, packed structs make it easier to write programs which access the contents.
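For instance, a rough sketch of that cast (rxBuf is a hypothetical receive buffer; a real program must still mind byte order and strict-aliasing rules):
#include <stdint.h>
typedef struct {
    uint8_t  msgType;
    uint16_t targetAddr;  /* may have to bswap */
    uint8_t  data[12];
    uint8_t  checksum;
} __attribute__((packed)) myStruct;
uint8_t rxBuf[16];  /* hypothetical: filled from the serial port */
uint16_t read_target_addr(void)
{
    const myStruct *msg = (const myStruct *)rxBuf; /* reinterpret the 16 raw bytes */
    return msg->targetAddr;
}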
Otherwise you end up using C++ and writing a class with accessor methods and stuff that does pointer arithmetic behind the scenes. In short, packed structs are for dealing efficiently with packed data, and packed data may be what your program is given to work with. For the most part, your code should read values out of the structure, work with them, and write them back when done. All else should be done outside the packed structure. Part of the problem is the low-level stuff that C tries to hide from the programmer, and the hoop jumping that is needed if such things really do matter to the programmer. (You almost need a different 'data layout' construct in the language so that you can say 'this thing is 48 bytes long, foo refers to the data 13 bytes in, and should be interpreted thus'; and a separate structured data construct, where you say 'I want a structure containing two ints, called alice and bob, and a float called carol, and I don't care how you implement it' -- in C both these use cases are shoehorned into the struct construct.)
Instead of manipulating the CMAKE_CXX_FLAGS
strings directly (which could be done more nicely using string(APPEND CMAKE_CXX_FLAGS_DEBUG " -g3")
btw), you can use add_compile_options:
add_compile_options(
"-Wall" "-Wpedantic" "-Wextra" "-fexceptions"
"$<$<CONFIG:DEBUG>:-O0;-g3;-ggdb>"
)
This would add the specified warnings to all build types, but only the given debugging flags to the DEBUG
build. Note that compile options are stored as a CMake list, which is just a string whose elements are separated by semicolons (;).
This works for me: sudo apt-get install libx11-dev
I use both because sometimes they give different, useful error messages.
The Python project was able to find and fix a number of small buglets when one of the core developers first tried compiling with clang.
char *arr; -- this declares arr as a character pointer; it can point to a single character or to a string of characters.
char arr[]; -- this declares arr as a character array; it can store any number of characters (even just one), but always counts on a terminating '\0' character, which makes it a string (e.g. char arr[] = "a" is equivalent to char arr[] = {'a','\0'}).
But when used as parameters of a called function there is no difference: char arr[] is adjusted to char *arr, so both receive the same pointer.
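A small self-contained example of the difference (a hedged sketch):
#include <stdio.h>
void print_str(char s[])  /* as a parameter, char s[] is identical to char *s */
{
    printf("%s\n", s);
}
int main(void)
{
    char *p = "hello";    /* pointer to a string literal: do not modify it */
    char a[] = "hello";   /* modifiable array of 6 chars, including '\0' */
    a[0] = 'H';
    print_str(p);
    print_str(a);
    return 0;
}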
(This answer addresses the C++ side of things, but the sign extension problem exists in C too.)
Handling all three char
types (signed
, unsigned
, and char
) is more delicate than it first appears. Values in the range 0 to SCHAR_MAX
(which is 127 for an 8-bit char
) are easy:
char c = somevalue;
signed char sc = c;
unsigned char uc = c;
int n = c;
But, when somevalue
is outside of that range, only going through unsigned char
gives you consistent results for the "same" char
values in all three types:
char c = somevalue;
signed char sc = c;
unsigned char uc = c;
// Might not be true: int(c) == int(sc) and int(c) == int(uc).
int nc = (unsigned char)c;
int nsc = (unsigned char)sc;
int nuc = (unsigned char)uc;
// Always true: nc == nsc and nc == nuc.
This is important when using functions from ctype.h, such as isupper
or toupper
, because of sign extension:
char c = negative_char; // Assuming CHAR_MIN < 0.
int n = c;
bool b = isupper(n); // Undefined behavior.
Note the conversion through int is implicit; this has the same UB:
char c = negative_char;
bool b = isupper(c);
To fix this, go through unsigned char
, which is easily done by wrapping ctype.h functions through safe_ctype:
template<int (&F)(int)>
int safe_ctype(unsigned char c) { return F(c); }
//...
char c = CHAR_MIN;
bool b = safe_ctype<isupper>(c); // No UB.
std::string s = "value that may contain negative chars; e.g. user input";
std::transform(s.begin(), s.end(), s.begin(), &safe_ctype<toupper>);
// Must wrap toupper to eliminate UB in this case, you can't cast
// to unsigned char because the function is called inside transform.
This works because any function taking any of the three char types can also take the other two char types. It leads to two functions which can handle any of the types:
int ord(char c) { return (unsigned char)c; }
char chr(int n) {
assert(0 <= n); // Or other error-/sanity-checking.
assert(n <= UCHAR_MAX);
return (unsigned char)n;
}
// Ord and chr are named to match similar functions in other languages
// and libraries.
ord(c)
always gives you a non-negative value – even when passed a negative char
or negative signed char
– and chr
takes any value ord
produces and gives back the exact same char
.
In practice, I would probably just cast through unsigned char
instead of using these, but they do succinctly wrap the cast, provide a convenient place to add error checking for int
-to-char
, and would be shorter and more clear when you need to use them several times in close proximity.
Instead of using the new pragmas, you can also use __attribute__((optimize("O0")))
for your needs. This has the advantage of just applying to a single function and not all functions defined in the same file.
Usage example:
void __attribute__((optimize("O0"))) foo(unsigned char data) {
// unmodifiable compiler code
}
Somewhere in your code there is a line #include <string>
. This by itself tells you that the program is written in C++. So using g++
is better than gcc
.
For the missing library: you should look around in the file system if you can find a file called libl.so
. Use the locate
command, try /usr/lib
, /usr/local/lib
, /opt/flex/lib
, or use the brute-force find / | grep /libl
.
Once you have found the file, you have to add the directory to the compiler command line, for example:
g++ -o scan lex.yy.c -L/opt/flex/lib -ll
The attribute packed
means that the compiler will not add padding between fields of the struct
. Padding is usually used to make fields aligned to their natural size, because some architectures impose penalties for unaligned access or don't allow it at all.
aligned(4)
means that the struct should be aligned to an address that is divisible by 4.
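A quick way to see both attributes in action (the sizes in the comments are typical for x86-64, not guaranteed):
#include <stdio.h>
struct plain { char c; int i; };                         /* usually 8: 3 padding bytes after c */
struct nopad { char c; int i; } __attribute__((packed)); /* 5: no padding */
struct four  { char c; } __attribute__((aligned(4)));    /* 4: size rounded up to alignment */
int main(void)
{
    printf("%zu %zu %zu\n",
           sizeof(struct plain), sizeof(struct nopad), sizeof(struct four));
    return 0;
}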
You could also use ld
option -Bdynamic
gcc <objectfiles> -static -lstatic1 -lstatic2 -Wl,-Bdynamic -ldynamic1 -ldynamic2
All libraries after it (including system ones linked by gcc automatically) will be linked dynamically.
I had the same problem. I installed "XCode: development tools" from the app store and it fixed the problem for me.
I think this link will help: https://itunes.apple.com/us/app/xcode/id497799835?mt=12&ls=1
Credit to Yann Ramin for his advice. I think there is a better solution with links, but this was easy and fast.
Good luck!
The man page makes it pretty clear. If you want to pass two arguments (-rpath
and .
) to the linker you can write
-Wl,-rpath,.
or alternatively
-Wl,-rpath -Wl,.
The arguments -Wl,-rpath .
you suggested do NOT make sense to my mind. How is gcc supposed to know that your second argument (.
) is supposed to be passed to the linker instead of being interpreted normally? The only way it would be able to know that is if it had insider knowledge of all possible linker arguments so it knew that -rpath
required an argument after it.
I solved this on 12.10 by installing libssl-dev.
sudo apt-get install libssl-dev
Unfortunately that approach is not portable C++ (so far).
All standard names are in namespace std
and moreover you cannot know which names are NOT defined by including a header (in other words it's perfectly legal for an implementation to declare the name std::string
directly or indirectly when using #include <vector>
).
Despite this however you are required by the language to know and tell the compiler which standard header includes which part of the standard library. This is a source of portability bugs because if you forget for example #include <map>
but use std::map
it's possible that the program compiles anyway silently and without warnings on a specific version of a specific compiler, and you may get errors only later when porting to another compiler or version.
In my opinion there is no valid technical excuse, because none of this is necessary for the general user: the compiler binary could have the whole standard namespace built in, and this could actually increase performance even more than precompiled headers (e.g. using perfect hashing for lookups, removing standard-header parsing or loading/demarshalling and so on).
The use of standard headers simplifies the life of who builds compilers or standard libraries and that's all. It's not something to help users.
However this is the way the language is defined: you need to know which header defines which names, so plan for some extra neurons to be burnt in pointless configurations to remember that (or try to find an IDE that automatically adds the standard headers you use and removes the ones you don't... a reasonable alternative).
This is now in the GCC wiki FAQ, see http://gcc.gnu.org/wiki/FAQ#gnu_stubs-32.h
Should it be LIBRARY_PATH instead of LD_LIBRARY_PATH? gcc checks for LIBRARY_PATH, which can be seen with the -v option.
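For example (hypothetical path and library name):
LIBRARY_PATH=/opt/mylibs gcc main.c -lfoo -o main
This makes the linker search /opt/mylibs for libfoo at link time, whereas LD_LIBRARY_PATH is only consulted by the dynamic loader at run time.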
So my question is: Is there a way to tell the compiler that a long long int is the also a int64_t, just like long int is?
This is a good question or problem, but I suspect the answer is NO.
Also, a long int
may not be a long long int
.
# if __WORDSIZE == 64
typedef long int int64_t;
# else
__extension__
typedef long long int int64_t;
# endif
I believe this is libc. I suspect you want to go deeper.
In a 32-bit compile with GCC (and with 32- and 64-bit MSVC), the output of the program will be:
int: 0
int64_t: 1
long int: 0
long long int: 1
32-bit Linux uses the ILP32 data model. Integers, longs and pointers are 32-bit. The 64-bit type is a long long
.
Microsoft documents the ranges at Data Type Ranges. The say the long long
is equivalent to __int64
.
However, the program resulting from a 64-bit GCC compile will output:
int: 0
int64_t: 1
long int: 1
long long int: 0
64-bit Linux uses the LP64
data model. Longs are 64-bit and long long
are 64-bit. As with 32-bit, Microsoft documents the ranges at Data Type Ranges and long long is still __int64
.
There's a ILP64
data model where everything is 64-bit. You have to do some extra work to get a definition for your word32
type. Also see papers like 64-Bit Programming Models: Why LP64?
But this is horribly hackish and does not scale well (actual functions of substance, uint64_t, etc)...
Yeah, it gets even better. GCC mixes and matches declarations that are supposed to take 64-bit types, so it's easy to get into trouble even though you follow a particular data model. For example, the following causes a compile error and tells you to use -fpermissive:
#if __LP64__
typedef unsigned long word64;
#else
typedef unsigned long long word64;
#endif
// intel definition of rdrand64_step (http://software.intel.com/en-us/node/523864)
// extern int _rdrand64_step(unsigned __int64 *random_val);
// Try it:
word64 val;
int res = _rdrand64_step(&val);
It results in:
error: invalid conversion from `word64* {aka long unsigned int*}' to `long long unsigned int*'
So, ignore LP64
and change it to:
typedef unsigned long long word64;
Then, wander over to a 64-bit ARM IoT gadget that defines LP64
and use NEON:
error: invalid conversion from `word64* {aka long long unsigned int*}' to `uint64_t*'
You should include <string.h>
(or its C++ equivalent, <cstring>
).
On CentOS or Fedora
yum install gcc-c++
NULL
isn't a keyword; it's a macro substitution for 0, and comes in stddef.h
or cstddef
, I believe. You haven't #included
an appropriate header file, so g++ sees NULL
as a regular variable name, and you haven't declared it.
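A one-line fix is to include a header that defines it; a minimal C-style sketch:
#include <stddef.h>  /* defines NULL; in C++, <cstddef> works too */
int main(void)
{
    char *p = NULL;  /* NULL is now known to the compiler */
    return p != NULL;
}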
C99 has it in stdbool.h, but in C90 it must be defined as a typedef or enum:
typedef int bool;
#define TRUE 1
#define FALSE 0
bool f = FALSE;
if (f) { ... }
Alternatively:
typedef enum { FALSE, TRUE } boolean;
boolean b = FALSE;
if (b) { ... }
You can use the centos-sclo-rh-testing repo to install GCC v7 without having to compile it forever, also enable V7 by default and let you switch between different versions if required.
sudo yum install -y yum-utils centos-release-scl;
sudo yum -y --enablerepo=centos-sclo-rh-testing install devtoolset-7-gcc;
echo "source /opt/rh/devtoolset-7/enable" | sudo tee -a /etc/profile;
source /opt/rh/devtoolset-7/enable;
gcc --version;
LLVM (used to mean "Low Level Virtual Machine" but not anymore) is a compiler infrastructure, written in C++, which is designed for compile-time, link-time, run-time, and "idle-time" optimization of programs written in arbitrary programming languages. Originally implemented for C/C++, the language-independent design (and the success) of LLVM has since spawned a wide variety of front-ends, including Objective C, Fortran, Ada, Haskell, Java bytecode, Python, Ruby, ActionScript, GLSL, and others.
Read this for more explanation
Also check out Unladen Swallow
A common case for simply setting -fpermissive and not sweating it exists: the thoroughly-tested and working third-party library that won't compile on newer compiler versions without -fpermissive. These libraries exist, and are very likely not the application developer's problem to solve, nor in the developer's schedule budget to do it.
Set -fpermissive and move on in that case.
Just wanted to add that if you want to debug stuff, you should compile with debug information before you debug, otherwise the debugger won't work. So, in g++ you need to do g++ -g source.cpp
. The -g
flag means that the compiler will insert debugging information into your executable, so that you can run gdb on it.
In gcc, you can label the parameter with the unused
attribute.
This attribute, attached to a variable, means that the variable is meant to be possibly unused. GCC will not produce a warning for this variable.
In practice this is accomplished by putting __attribute__ ((unused))
just before the parameter. For example:
void foo(workerid_t workerId) { }
becomes
void foo(__attribute__((unused)) workerid_t workerId) { }
Interesting, I didn't know make would default to using the C compiler given rules regarding source files.
Anyway, a simple solution that demonstrates simple Makefile concepts would be:
HEADERS = program.h headers.h
default: program
program.o: program.c $(HEADERS)
gcc -c program.c -o program.o
program: program.o
gcc program.o -o program
clean:
-rm -f program.o
-rm -f program
(bear in mind that make requires tab instead of space indentation, so be sure to fix that when copying)
However, to support more C files, you'd have to make new rules for each of them. Thus, to improve:
HEADERS = program.h headers.h
OBJECTS = program.o
default: program
%.o: %.c $(HEADERS)
gcc -c $< -o $@
program: $(OBJECTS)
gcc $(OBJECTS) -o $@
clean:
-rm -f $(OBJECTS)
-rm -f program
I tried to make this as simple as possible by omitting variables like $(CC) and $(CFLAGS) that are usually seen in makefiles. If you're interested in figuring that out, I hope I've given you a good start on that.
Here's the Makefile I like to use for C source. Feel free to use it:
TARGET = prog
LIBS = -lm
CC = gcc
CFLAGS = -g -Wall
.PHONY: default all clean
default: $(TARGET)
all: default
OBJECTS = $(patsubst %.c, %.o, $(wildcard *.c))
HEADERS = $(wildcard *.h)
%.o: %.c $(HEADERS)
$(CC) $(CFLAGS) -c $< -o $@
.PRECIOUS: $(TARGET) $(OBJECTS)
$(TARGET): $(OBJECTS)
$(CC) $(OBJECTS) -Wall $(LIBS) -o $@
clean:
-rm -f *.o
-rm -f $(TARGET)
It uses the wildcard and patsubst features of the make utility to automatically include .c and .h files in the current directory, meaning when you add new code files to your directory, you won't have to update the Makefile. However, if you want to change the name of the generated executable, libraries, or compiler flags, you can just modify the variables.
In either case, don't use autoconf, please. I'm begging you! :)
For CentOS 7:
sudo yum install python36u-devel
I followed the instructions here for installing python3.6 on several VMs: https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-centos-7 and was then able to build mod_wsgi and get it working with a python3.6 virtualenv
So, the way the constructors and destructors work is that the shared object file contains special sections (.ctors and .dtors on ELF) which contain references to the functions marked with the constructor and destructor attributes, respectively. When the library is loaded/unloaded the dynamic loader program (ld.so or somesuch) checks whether such sections exist, and if so, calls the functions referenced therein.
Come to think of it, there is probably some similar magic in the normal static linker so that the same code is run on startup/shutdown regardless if the user chooses static or dynamic linking.
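For reference, a minimal self-contained sketch of the attributes being discussed:
#include <stdio.h>
__attribute__((constructor))
static void on_load(void)   { puts("constructor: before main"); }
__attribute__((destructor))
static void on_unload(void) { puts("destructor: after main"); }
int main(void)
{
    puts("main");
    return 0;
}
Built into a shared library instead, the same two functions run at load/unload (e.g. dlopen/dlclose) time.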
In C99 the length modifier for long double
seems to be L
and not l
. man fprintf
(or the equivalent on Windows) should tell you for your particular platform.
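A minimal example:
#include <stdio.h>
int main(void)
{
    long double x = 0.1L;
    printf("%Lf\n", x);  /* 'L', not 'l', is the long double modifier */
    return 0;
}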
Let's decompile to see what GCC 4.8 does with it
Without __builtin_expect
#include "stdio.h"
#include "time.h"
int main() {
/* Use time to prevent it from being optimized away. */
int i = !time(NULL);
if (i)
printf("%d\n", i);
puts("a");
return 0;
}
Compile and decompile with GCC 4.8.2 x86_64 Linux:
gcc -c -O3 -std=gnu11 main.c
objdump -dr main.o
Output:
0000000000000000 <main>:
0: 48 83 ec 08 sub $0x8,%rsp
4: 31 ff xor %edi,%edi
6: e8 00 00 00 00 callq b <main+0xb>
7: R_X86_64_PC32 time-0x4
b: 48 85 c0 test %rax,%rax
e: 75 14 jne 24 <main+0x24>
10: ba 01 00 00 00 mov $0x1,%edx
15: be 00 00 00 00 mov $0x0,%esi
16: R_X86_64_32 .rodata.str1.1
1a: bf 01 00 00 00 mov $0x1,%edi
1f: e8 00 00 00 00 callq 24 <main+0x24>
20: R_X86_64_PC32 __printf_chk-0x4
24: bf 00 00 00 00 mov $0x0,%edi
25: R_X86_64_32 .rodata.str1.1+0x4
29: e8 00 00 00 00 callq 2e <main+0x2e>
2a: R_X86_64_PC32 puts-0x4
2e: 31 c0 xor %eax,%eax
30: 48 83 c4 08 add $0x8,%rsp
34: c3 retq
The instruction order in memory was unchanged: first the printf
and then puts
and the retq
return.
With __builtin_expect
Now replace if (i)
with:
if (__builtin_expect(i, 0))
and we get:
0000000000000000 <main>:
0: 48 83 ec 08 sub $0x8,%rsp
4: 31 ff xor %edi,%edi
6: e8 00 00 00 00 callq b <main+0xb>
7: R_X86_64_PC32 time-0x4
b: 48 85 c0 test %rax,%rax
e: 74 11 je 21 <main+0x21>
10: bf 00 00 00 00 mov $0x0,%edi
11: R_X86_64_32 .rodata.str1.1+0x4
15: e8 00 00 00 00 callq 1a <main+0x1a>
16: R_X86_64_PC32 puts-0x4
1a: 31 c0 xor %eax,%eax
1c: 48 83 c4 08 add $0x8,%rsp
20: c3 retq
21: ba 01 00 00 00 mov $0x1,%edx
26: be 00 00 00 00 mov $0x0,%esi
27: R_X86_64_32 .rodata.str1.1
2b: bf 01 00 00 00 mov $0x1,%edi
30: e8 00 00 00 00 callq 35 <main+0x35>
31: R_X86_64_PC32 __printf_chk-0x4
35: eb d9 jmp 10 <main+0x10>
The printf
(compiled to __printf_chk
) was moved to the very end of the function, after puts
and the return to improve branch prediction as mentioned by other answers.
So it is basically the same as:
int main() {
int i = !time(NULL);
if (i)
goto printf;
puts:
puts("a");
return 0;
printf:
printf("%d\n", i);
goto puts;
}
This optimization was not done with -O0
.
But good luck on writing an example that runs faster with __builtin_expect
than without, CPUs are really smart these days. My naive attempts are here.
C++20 [[likely]]
and [[unlikely]]
C++20 has standardized those C++ built-ins: How to use C++20's likely/unlikely attribute in if-else statement They will likely (a pun!) do the same thing.
Make sure that the -L option appears ahead of the -l option; the order of options in linker command lines does matter, especially with static libraries. The -L option specifies a directory to be searched for libraries (static or shared). The -lname option specifies a library which is either libname.a (static) or libname.so (shared on most variants of Unix, but Mac OS X uses .dylib and HP-UX used to use .sl). Conventionally, a static library will be in a file libmine.a. This is convention, not mandatory, but if the name is not in the libmine.a format, you cannot use the -lmine notation to find it; you must list it explicitly on the compiler (linker) command line.
The -L./libmine option says "there is a sub-directory called libmine which can be searched to find libraries". I can see three possibilities:
1) That sub-directory contains libmine.a, in which case you also need to add -lmine to the linker line (after the object files that reference the library).
2) You have a file libmine that is a static archive, in which case you simply list it as a file ./libmine with no -L in front.
3) You have a file libmine.a in the current directory that you want to pick up. You can either write ./libmine.a or -L . -lmine and both should find the library.
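In the first case, for example, the link line would look like this (file names hypothetical):
gcc main.o -L./libmine -lmine -o prog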
and both should find the library.Are you mixing C and C++? One issue that can occur is that the declarations in the .h
file for a .c
file need to be surrounded by:
#if defined(__cplusplus)
extern "C" { // Make sure we have C-declarations in C++ programs
#endif
and:
#if defined(__cplusplus)
}
#endif
Note: if unable / unwilling to modify the .h
file(s) in question, you can surround their inclusion with extern "C"
:
extern "C" {
#include <abc.h>
} //extern
Some versions of libc contain functions that deal with stack traces; you might be able to use them:
http://www.gnu.org/software/libc/manual/html_node/Backtraces.html
I remember using libunwind a long time ago to get stack traces, but it may not be supported on your platform.
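A minimal sketch using the glibc interface from that link (compile with -rdynamic so that symbol names show up):
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>
static void print_trace(void)
{
    void *addrs[32];
    int n = backtrace(addrs, 32);              /* collect return addresses */
    char **syms = backtrace_symbols(addrs, n); /* render them as strings */
    if (syms) {
        for (int i = 0; i < n; i++)
            printf("%s\n", syms[i]);
        free(syms);  /* one free() releases the whole array */
    }
}
int main(void)
{
    print_trace();
    return 0;
}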
That's a good problem. In order to solve it you will also have to disable ASLR; otherwise the address of g() will be unpredictable.
Disable ASLR:
sudo bash -c 'echo 0 > /proc/sys/kernel/randomize_va_space'
Disable canaries:
gcc overflow.c -o overflow -fno-stack-protector
After canaries and ASLR are disabled, it should be a straightforward attack like the ones described in Smashing the Stack for Fun and Profit.
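For reference, the vulnerable program in this kind of exercise typically looks something like this (a hypothetical sketch, not the assignment's actual code):
#include <stdio.h>
#include <string.h>
void g(void)
{
    puts("you reached g()");
}
void f(const char *input)
{
    char buf[16];
    strcpy(buf, input); /* no bounds check: a long input overwrites the
                           saved return address stored above buf */
}
int main(int argc, char **argv)
{
    if (argc > 1)
        f(argv[1]);
    return 0;
}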
Here is a list of security features used in ubuntu: https://wiki.ubuntu.com/Security/Features You don't have to worry about NX bits, the address of g() will always be in a executable region of memory because it is within the TEXT memory segment. NX bits only come into play if you are trying to execute shellcode on the stack or heap, which is not required for this assignment.
Now go and clobber that EIP!
Look at:
CMAKE_EXE_LINKER_FLAGS
CMAKE_MODULE_LINKER_FLAGS
CMAKE_SHARED_LINKER_FLAGS
CMAKE_STATIC_LINKER_FLAGS
You define the struct as xyx, but you're trying to create a struct called xyz.
For CUDA 6.5 (and apparently 7.0 and 7.5), I've created a version of the gcc 4.8.5 RPM package (under Fedora Core 30) that allows that version of gcc to be install alongside your system's current GCC.
You can find all of that information here.
Each compiler has its own libexec/ directory. Normally the libexec directory contains small helper programs called by other programs. In this case, gcc is looking for its own 'cc1' compiler. Your machine may contain different versions of gcc, and each version should have its own 'cc1'. Normally these compilers are located at:
/usr/local/libexec/gcc/<architecture>/<compiler>/<compiler_version>/cc1
The path is similar for g++. The above error means that the gcc version currently in use is not able to find its own 'cc1' compiler. This normally points to a PATH issue.
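You can ask gcc which cc1 it will run (and whether it found one):
gcc -print-prog-name=cc1
If this prints just cc1 rather than a full path, gcc did not find its helper.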
It appears this can be done. I'm unable to determine the version of GCC that it was added, but it was sometime before June 2010.
Here's an example:
#pragma GCC diagnostic error "-Wuninitialized"
foo(a); /* error is given for this one */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wuninitialized"
foo(b); /* no diagnostic for this one */
#pragma GCC diagnostic pop
foo(c); /* error is given for this one */
#pragma GCC diagnostic pop
foo(d); /* depends on command line options */
Use
path_to_exe >> log_file
to see the output of your command. Errors can also be redirected with
path_to_exe &> log_file
You can also use
crontab -l
to check if your edits were saved.
Declaring a function as non-static in one file but implementing it as static in another file (or elsewhere in the same file) can also cause this problem. For example, the following code will produce this error.
void inlet_update_my_ratio(object_t *myobject);
//some where the implementation is like this
static void inlet_update_my_ratio(object_t *myobject) {
//code
}
If you remove the static from the implementation, the error will go away as below.
void inlet_update_my_ratio(object_t *myobject) {
//code
}
I had a similar warning/error/failure when I was simply trying to make an executable from two different object files (main.o and add.o). I was using the command:
gcc -o exec main.o add.o
But my program is a C++ program. Using the g++
compiler solved my issue:
g++ -o exec main.o add.o
I was always under the impression that gcc
could figure these things out on its own. Apparently not. I hope this helps someone else searching for this error.
make DESTDIR=./new/customized/path install
This quick command worked for me for the OpenCV 3.2.0 release installation on Ubuntu 16. The DESTDIR path can be relative as well as absolute.
Such redirection can also be useful in case the user does not have admin privileges, as long as the DESTDIR location has the right access for the user, e.g. /home//
I got the same thing. Running "make" and it fails with just this message.
% make
make: *** [all] Error 1
This was caused by a command in a rule terminates with non-zero exit status. E.g. imagine the following (stupid) Makefile
:
all:
@false
echo "hello"
This would fail (without printing "hello") with the above message since false
terminates with exit status 1.
In my case, I was trying to be clever and make a backup of a file before processing it (so that I could compare the newly generated file with my previous one). I did this by having a command in my Make rule that looked like this:
@[ -e $@ ] && mv $@ $@.bak
...not realizing that if the target file does not exist, then the above construction will exit (without running the mv
command) with exit status 1, and thus any subsequent commands in that rule failed to run. Rewriting my faulty line to:
@if [ -e $@ ]; then mv $@ $@.bak; fi
solved my problem.
One possibility which has not been mentioned so far is that you might not be editing the file you think you are. i.e. your editor might have a different cwd than you had in mind.
Run 'more' on the file you're compiling to double check that it does indeed have the contents you hope it does. Hope that helps!
Note that it is now possible to use some of C++11 std::thread in the win32 threading mode. These header-only adapters worked out of the box for me: https://github.com/meganz/mingw-std-threads
From the revision history it looks like there is some recent attempt to make this a part of the mingw64 runtime.
Since uintptr_t
is not guaranteed to be there in C++/C++11, if this is a one way conversion you can consider uintmax_t
, always defined in <cstdint>
.
auto real_param = reinterpret_cast<uintmax_t>(param);
To play safe, one could add an assertion anywhere in the code:
static_assert(sizeof(uintmax_t) >= sizeof(void *),
              "No suitable integer type for conversion from pointer type");
If you just want to experiment, there's an Objective-C compiler for .NET (Windows) here: qckapp
To test without copy elision and see your copy/move constructors/operators in action, add -fno-elide-constructors.
Even with no optimizations (-O0), GCC and Clang will still do copy elision, which has the effect of skipping copy/move constructors in some cases. See this question for the details about copy elision.
However, in Clang 3.4 it does trigger a bug (an invalid temporary object without calling constructor), which is fixed in 3.5.
If you want to find out how to set-up a non-native cross compile, I found this useful:
On the target machine,
% gcc -march=native -Q --help=target | grep march
-march= core-avx-i
Then use this on the build machine:
% gcc -march=core-avx-i ...
Others have answered sufficiently. I would just like to add a few examples of frequent extensions:
The main
function returning void
. This is not defined by the standard, meaning it will only work on some compilers (including GCC), but not on others. By the way, int main()
and int main(int, char**)
are the two signatures that the standard does define.
Another popular extension is being able to declare and define functions inside other functions:
void f()
{
void g()
{
// ...
}
// ...
g();
// ...
}
This is nonstandard. If you want this kind of behavior, check out C++11 lambdas
I had the same issue with mine. I tried using sudo apt-get install build-essential, but it still wouldn't work. I simply created a hard link to the gcc-x binary in the /usr/bin/ folder: sudo ln /usr/bin/gcc-x /usr/bin/gcc
That worked for me!
Firstly, in general:
If these .h
files are indeed typical C-style header files (as opposed to being something completely different that just happens to be named with .h
extension), then no, there's no reason to "compile" these header files independently. Header files are intended to be included into implementation files, not fed to the compiler as independent translation units.
Since a typical header file usually contains only declarations that can be safely repeated in each translation unit, it is perfectly expected that "compiling" a header file will have no harmful consequences. But at the same time it will not achieve anything useful.
Basically, compiling hello.h
as a standalone translation unit is equivalent to creating a degenerate dummy.c
file consisting only of #include "hello.h"
directive, and feeding that dummy.c
file to the compiler. It will compile, but it will serve no meaningful purpose.
Secondly, specifically for GCC:
Many compilers will treat files differently depending on the file name extension. GCC has special treatment for files with .h
extension when they are supplied to the compiler as command-line arguments. Instead of treating it as a regular translation unit, GCC creates a precompiled header file for that .h
file.
You can read about it here: http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html
So, this is the reason you might see .h
files being fed directly to GCC.
Briefly, the error means that you can't take a static library and link it into a dynamic (shared) one.
The correct way is to have a libavcodec
compiled into a .so
instead of .a
, so the other .so
library you are trying to build will link well.
The shortest way to do so is to add --enable-shared
at ./configure
options. Or even you may try to disable shared (or static) libraries at all... you choose what is suitable for you!
This is the great description and step-by-step instruction how to create and manage master and slave (gcc and g++) alternatives.
Shortly it's:
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.6 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.6
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.7 40 --slave /usr/bin/g++ g++ /usr/bin/g++-4.7
sudo update-alternatives --config gcc
First execute this
sudo apt-get install gcc binutils make linux-source
Then run again
/usr/bin/vmware-config-tools.pl
This is all you need to do. Now your system has gcc, make, and the Linux kernel sources.
This rule
main: producer.o consumer.o AddRemove.o
$(COMPILER) -pthread $(CCFLAGS) -o producer.o consumer.o AddRemove.o
is wrong. It says to create a file named producer.o (with -o producer.o
), but you want to create a file named main
. Please excuse the shouting, but ALWAYS USE $@ TO REFERENCE THE TARGET:
main: producer.o consumer.o AddRemove.o
$(COMPILER) -pthread $(CCFLAGS) -o $@ producer.o consumer.o AddRemove.o
As Shahbaz rightly points out, the gmake professionals would also use $^
which expands to all the prerequisites in the rule. In general, if you find yourself repeating a string or name, you're doing it wrong and should use a variable, whether one of the built-ins or one you create.
main: producer.o consumer.o AddRemove.o
$(COMPILER) -pthread $(CCFLAGS) -o $@ $^
It looks like the object file was compiled on a 64-bit toolchain, and you're using a 32-bit toolchain. Have you tried recompiling the object file in 32-bit mode?
You are correct in that glibc uses symbol versioning. If you are curious, the symbol versioning implementation introduced in glibc 2.1 is described here and is an extension of Sun's symbol versioning scheme described here.
One option is to statically link your binary. This is probably the easiest option.
You could also build your binary in a chroot build environment, or using a glibc-new => glibc-old cross-compiler.
According to the http://www.trevorpounds.com blog post Linking to Older Versioned Symbols (glibc), it is possible to force any symbol to be linked against an older one, so long as it is valid, by using the same .symver
pseudo-op that is used for defining versioned symbols in the first place. The following example is excerpted from the blog post.
The following example makes use of glibc’s realpath, but makes sure it is linked against an older 2.2.5 version.
#include <limits.h>
#include <stdlib.h>
#include <stdio.h>
__asm__(".symver realpath,realpath@GLIBC_2.2.5");
int main()
{
const char* unresolved = "/lib64";
char resolved[PATH_MAX+1];
if(!realpath(unresolved, resolved))
{ return 1; }
printf("%s\n", resolved);
return 0;
}
As you noticed, these are Makefile {macros or variables}, not compiler options. They implement a set of conventions. (Macros is an old name for them, still used by some. GNU make doc calls them variables.)
The only reason that the names matter is the default make rules, visible via make -p
, which use some of them.
If you write all your own rules, you get to pick all your own macro names.
In a vanilla gnu make, there's no such thing as CCFLAGS. There are CFLAGS
, CPPFLAGS
, and CXXFLAGS
. CFLAGS
for the C compiler, CXXFLAGS
for C++, and CPPFLAGS
for both.
Why is CPPFLAGS
in both? Conventionally, it's the home of preprocessor flags (-D
, -U
) and both c and c++ use them. Now, the assumption that everyone wants the same define environment for c and c++ is perhaps questionable, but traditional.
P.S. As noted by James Moore, some projects use CPPFLAGS for flags to the C++ compiler, not flags to the C preprocessor. The Android NDK, for one huge example.
I wanted to ask the same question. Here is my current solution to obtain a string like this: 2013-02-07 09:24:40.749355372
I am not sure if there is a more straightforward solution than this, but at least the string format is freely configurable with this approach.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#define NANO 1000000000L
// buf needs to store 30 characters
int timespec2str(char *buf, unsigned int len, struct timespec *ts) {
int ret;
struct tm t;
tzset();
if (localtime_r(&(ts->tv_sec), &t) == NULL)
return 1;
ret = strftime(buf, len, "%F %T", &t);
if (ret == 0)
return 2;
len -= ret - 1;
ret = snprintf(&buf[strlen(buf)], len, ".%09ld", ts->tv_nsec);
if (ret >= len)
return 3;
return 0;
}
int main(int argc, char **argv) {
clockid_t clk_id = CLOCK_REALTIME;
const unsigned int TIME_FMT = strlen("2012-12-31 12:59:59.123456789") + 1;
char timestr[TIME_FMT];
struct timespec ts, res;
clock_getres(clk_id, &res);
clock_gettime(clk_id, &ts);
if (timespec2str(timestr, sizeof(timestr), &ts) != 0) {
printf("timespec2str failed!\n");
return EXIT_FAILURE;
} else {
unsigned long resol = res.tv_sec * NANO + res.tv_nsec;
printf("CLOCK_REALTIME: res=%ld ns, time=%s\n", resol, timestr);
return EXIT_SUCCESS;
}
}
Compile, run, output:
gcc mwe.c -lrt
$ ./a.out
CLOCK_REALTIME: res=1 ns, time=2013-02-07 13:41:17.994326501
Since there's no mention of how to compile a .c file together with a bunch of .o files, and this comment asks for it:
where's the main.c in this answer? :/ if file1.c is the main, how do you link it with other already compiled .o files? – Tom Brito Oct 12 '14 at 19:45
$ gcc main.c lib_obj1.o lib_obj2.o lib_objN.o -o x0rbin
Here, main.c is the C file with the main() function and the object files (*.o) are precompiled. GCC knows how to handle these together, and invokes the linker accordingly and results in a final executable, which in our case is x0rbin.
You will be able to use functions not defined in the main.c but using an extern reference to functions defined in the object files (*.o).
You can also link with .obj or other extensions if the object files have the correct format (such as COFF).
In addition, gcc will look in the directories specified after the -I
option.
I ran into some problems in CLion and finally solved them. Here is some of my experience:
.a
files are static libraries typically generated by the archive tool. You usually include the header files associated with that static library and then link to the library when you are compiling.
Use the -S (note: capital S) switch to GCC, and it will emit the assembly code to a file with a .s extension. For example, the following command:
gcc -O2 -S foo.c
will leave the generated assembly code on the file foo.s.
Ripped straight from http://www.delorie.com/djgpp/v2faq/faq8_20.html (but removing the erroneous -c).
It's part of the exception handling. The gcc EH mechanism allows mixing various EH models, and a personality routine is invoked to determine whether an exception matches, what finalization to invoke, etc. This specific personality routine is for C++ exception handling (as opposed to, say, gcj/Java exception handling).
In gcc, this isn't supported. In fact, this isn't supported in any existing compiler/linker i'm aware of.
I also encountered the same problem. I don't know why, but adding the -lpthread option to the compiler fixed everything.
Old:
$ g++ -rdynamic -m64 -fPIE -pie -o /tmp/node/out/Release/mksnapshot ...*.o *.a -ldl -lrt
This got the following error; if I append the -lpthread option to the above command, then it's OK.
/usr/bin/ld: /tmp/node/out/Release/obj.host/v8_libbase/deps/v8/src/base/platform/condition-variable.o: undefined reference to symbol 'pthread_condattr_setclock@@GLIBC_2.3.3'
//lib/x86_64-linux-gnu/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
The -I
directive does the job:
gcc -Icore -Ianimator -Iimages -Ianother_dir -Iyet_another_dir my_file.c
The language standard simply doesn't allow for it. Labels can only be followed by statements, and declarations do not count as statements in C. The easiest way to get around this is by inserting an empty statement after your label, which relieves you from keeping track of the scope the way you would need to inside a block.
#include <stdio.h>
int main ()
{
printf("Hello ");
goto Cleanup;
Cleanup: ; //This is an empty statement.
char *str = "World\n";
printf("%s\n", str);
}
After reading the http://wiki.debian.org/Multiarch/LibraryPathOverview page that jeremiah posted, I found the gcc flag that works without the symlink:
gcc -B/usr/lib/x86_64-linux-gnu hello.c
So, you can just add -B/usr/lib/x86_64-linux-gnu
to the CFLAGS variable in your Makefile.
I got these warnings for the mempcpy function. The man page says this function is a GNU extension and the synopsis shows:
#define _GNU_SOURCE
#include <string.h>
When the #define is added to my source before the #include, the declarations for the GNU extensions are made visible and the warnings disappear.
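To illustrate (a small self-contained example):
#define _GNU_SOURCE  /* must appear before the first #include */
#include <string.h>
#include <stdio.h>
int main(void)
{
    char dst[16];
    char *end = mempcpy(dst, "hi", 3); /* like memcpy, but returns dst + 3 */
    printf("%td\n", end - dst);        /* prints 3 */
    return 0;
}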
You can also use rlutil:
- cross-platform
- header only (rlutil.h)
- provides setColor(), cls(), getch(), gotoxy(), etc.
Your code would become something like this:
#include <stdio.h>
#include "rlutil.h"
int main(int argc, char* argv[])
{
setColor(BLUE);
printf("\n \n \t This is dummy program for text color ");
getch();
return 0;
}
Have a look at example.c and test.cpp for C and C++ examples.
Here are some thoughts and ideas:
Use ROM more creatively.
Store anything you can in ROM. Instead of calculating things, store look-up tables in ROM. (Make sure your compiler is outputting your look-up tables to the read-only section! Print out memory addresses at runtime to check!) Store your interrupt vector table in ROM. Of course, run some tests to see how reliable your ROM is compared to your RAM.
Use your best RAM for the stack.
SEUs in the stack are probably the most likely source of crashes, because it is where things like index variables, status variables, return addresses, and pointers of various sorts typically live.
Implement timer-tick and watchdog timer routines.
You can run a "sanity check" routine every timer tick, as well as a watchdog routine to handle the system locking up. Your main code could also periodically increment a counter to indicate progress, and the sanity-check routine could ensure this has occurred.
Implement error-correcting-codes in software.
You can add redundancy to your data to be able to detect and/or correct errors. This will add processing time, potentially leaving the processor exposed to radiation for a longer time, thus increasing the chance of errors, so you must consider the trade-off.
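As a toy illustration of the software-redundancy idea (three copies with bitwise majority voting; real systems would use proper ECC such as Hamming codes):
#include <stdint.h>
/* Keep three copies of a value; a single corrupted copy is outvoted. */
typedef struct { uint32_t a, b, c; } tmr32;
static void tmr_set(tmr32 *t, uint32_t v) { t->a = t->b = t->c = v; }
static uint32_t tmr_get(const tmr32 *t)
{
    /* Each result bit is the value held by at least two of the copies. */
    return (t->a & t->b) | (t->a & t->c) | (t->b & t->c);
}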
Remember the caches.
Check the sizes of your CPU caches. Data that you have accessed or modified recently will probably be within a cache. I believe you can disable at least some of the caches (at a big performance cost); you should try this to see how susceptible the caches are to SEUs. If the caches are hardier than RAM then you could regularly read and re-write critical data to make sure it stays in cache and bring RAM back into line.
Use page-fault handlers cleverly.
If you mark a memory page as not-present, the CPU will issue a page fault when you try to access it. You can create a page-fault handler that does some checking before servicing the read request. (PC operating systems use this to transparently load pages that have been swapped to disk.)
Use assembly language for critical things (which could be everything).
With assembly language, you know what is in registers and what is in RAM; you know what special RAM tables the CPU is using, and you can design things in a roundabout way to keep your risk down.
Use objdump
to actually look at the generated assembly language, and work out how much code each of your routines takes up.
If you are using a big OS like Linux then you are asking for trouble; there is just so much complexity and so many things to go wrong.
Remember it is a game of probabilities.
A commenter said
Every routine you write to catch errors will be subject to failing itself from the same cause.
While this is true, the chances of errors in the (say) 100 bytes of code and data required for a check routine to function correctly are much smaller than the chance of errors elsewhere. If your ROM is pretty reliable and almost all the code/data is actually in ROM then your odds are even better.
Use redundant hardware.
Use 2 or more identical hardware setups with identical code. If the results differ, a reset should be triggered. With 3 or more devices you can use a "voting" system to try to identify which one has been compromised.
AArch64 is the 64-bit state introduced in the Armv8-A architecture (https://en.wikipedia.org/wiki/ARM_architecture#ARMv8-A). The 32-bit state which is backwards compatible with Armv7-A and previous 32-bit Arm architectures is referred to as AArch32. Therefore the GNU triplet for the 64-bit ISA is aarch64. The Linux kernel community chose to call their port of the kernel to this architecture arm64 rather than aarch64, so that's where some of the arm64 usage comes from.
As far as I know the Apple backend for aarch64 was called arm64 whereas the LLVM community-developed backend was called aarch64 (as it is the canonical name for the 64-bit ISA) and later the two were merged and the backend now is called aarch64.
So AArch64 and ARM64 refer to the same thing.
The compiler needs to know the size of the second dimension in your two dimensional array. For example:
void print_graph(g_node graph_node[], double weight[][5], int nodes);
From info gcc
(emphasis mine):
-ansi
In C mode, this is equivalent to -std=c90. In C++ mode, it is equivalent to -std=c++98. This turns off certain features of GCC that are incompatible with ISO C90 (when compiling C code), or of standard C++ (when compiling C++ code), such as the asm and typeof keywords, and predefined macros such as 'unix' and 'vax' that identify the type of system you are using. It also enables the undesirable and rarely used ISO trigraph feature. For the C compiler, it disables recognition of C++ style // comments as well as the inline keyword.
(The documentation uses vax in the example instead of linux because, when it was written, it was perhaps the more popular system ;-).
The basic idea is that GCC only tries to fully comply with the ISO standards when it is invoked with the -ansi
option.
If you're using MySQL:
SELECT
DATE(created) AS saledate,
SUM(amount)
FROM
Sales
GROUP BY
saledate
If you're using MS SQL 2008:
SELECT
CAST(created AS date) AS saledate,
SUM(amount)
FROM
Sales
GROUP BY
CAST(created AS date)
Just another answer to explain some subtle points in the code of the other answers:
socket.INADDR_ANY - (Edited) In the context of IP_ADD_MEMBERSHIP, this doesn't really bind the socket to all interfaces but just chooses the default interface where multicast is up (according to the routing table). See "What does it mean to bind a multicast (UDP) socket?" for more on how multicast works.
Multicast receiver:
import socket
import struct
import argparse
def run(groups, port, iface=None, bind_group=None):
    # generally speaking you want to bind to one of the groups you joined in
    # this script,
    # but it is also possible to bind to group which is added by some other
    # programs (like another python program instance of this)
    # assert bind_group in groups + [None], \
    #     'bind group not in groups to join'
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # allow reuse of socket (to allow another instance of python running this
    # script binding to the same ip/port)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('' if bind_group is None else bind_group, port))
    for group in groups:
        mreq = struct.pack(
            '4sl' if iface is None else '4s4s',
            socket.inet_aton(group),
            socket.INADDR_ANY if iface is None else socket.inet_aton(iface))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        print(sock.recv(10240))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--port', type=int, default=19900)
    parser.add_argument('--join-mcast-groups', default=[], nargs='*',
                        help='multicast groups (ip addrs) to listen to join')
    parser.add_argument(
        '--iface', default=None,
        help='local interface to use for listening to multicast data; '
             'if unspecified, any interface would be chosen')
    parser.add_argument(
        '--bind-group', default=None,
        help='multicast groups (ip addrs) to bind to for the udp socket; '
             'should be one of the multicast groups joined globally '
             '(not necessarily joined in this python program) '
             'in the interface specified by --iface. '
             'If unspecified, bind to 0.0.0.0 '
             '(all addresses (all multicast addresses) of that interface)')
    args = parser.parse_args()
    run(args.join_mcast_groups, args.port, args.iface, args.bind_group)
sample usage: (run the below in two consoles and choose your own --iface (must be same as the interface that receives the multicast data))
python3 multicast_recv.py --iface='192.168.56.102' --join-mcast-groups '224.1.1.1' '224.1.1.2' '224.1.1.3' --bind-group '224.1.1.2'
python3 multicast_recv.py --iface='192.168.56.102' --join-mcast-groups '224.1.1.4'
Multicast sender:
import socket
import argparse
def run(group, port):
    MULTICAST_TTL = 20
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, MULTICAST_TTL)
    sock.sendto(b'from multicast_send.py: ' +
                f'group: {group}, port: {port}'.encode(), (group, port))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--mcast-group', default='224.1.1.1')
    parser.add_argument('--port', default=19900)
    args = parser.parse_args()
    run(args.mcast_group, args.port)
sample usage: # assume the receiver binds to the below multicast group address and that some program requests to join that group. And to simplify the case, assume the receiver and the sender are under the same subnet
python3 multicast_send.py --mcast-group '224.1.1.2'
python3 multicast_send.py --mcast-group '224.1.1.4'
I would unpack your gem in the application vendor folder
gem unpack your.gem --target /path_to_app/vendor/gems/
Then add the path on the Gemfile to link unpacked gem.
gem 'your', '2.0.1', :path => 'vendor/gems/your'
Something throws an exception of type std::bad_alloc
, indicating that you ran out of memory. This exception is propagated through until main
, where it "falls off" your program and causes the error message you see.
Since nobody here knows what "RectInvoice", "rectInvoiceVector", "vect", "im" and so on are, we cannot tell you what exactly causes the out-of-memory condition. You didn't even post your real code, because w h
looks like a syntax error.
You may use the following functions, which I wrote in one of my helper classes in a project.
Just call
showShareActivity(msg:"message", image: nil, url: nil, sourceRect: nil)
and it will work for both iPhone and iPad. If you pass any view's CGRect value via sourceRect, it will also show a little arrow on iPad.
func topViewController()-> UIViewController{
var topViewController:UIViewController = UIApplication.shared.keyWindow!.rootViewController!
while ((topViewController.presentedViewController) != nil) {
topViewController = topViewController.presentedViewController!;
}
return topViewController
}
func showShareActivity(msg:String?, image:UIImage?, url:String?, sourceRect:CGRect?){
var objectsToShare = [AnyObject]()
if let url = url {
objectsToShare = [url as AnyObject]
}
if let image = image {
objectsToShare = [image as AnyObject]
}
if let msg = msg {
objectsToShare = [msg as AnyObject]
}
let activityVC = UIActivityViewController(activityItems: objectsToShare, applicationActivities: nil)
activityVC.modalPresentationStyle = .popover
activityVC.popoverPresentationController?.sourceView = topViewController().view
if let sourceRect = sourceRect {
activityVC.popoverPresentationController?.sourceRect = sourceRect
}
topViewController().present(activityVC, animated: true, completion: nil)
}
I spent some time on the two main approaches here and they didn't work out for me. I am using Netbeans for the builds; maybe there's more going on there. I had some errors and warnings from Maven 3 with some constructs, but I think those were easy to correct. No biggie.
I did find an answer that looks maintainable and simple to implement in this article on DZone:
I already have a resources/config sub-folder, and I named my file app.properties, to better reflect the kind of stuff we may keep there (like a support URL, etc.).
The only caveat is that Netbeans gives a warning that the IDE needs filtering off. Not sure where/how. It has no effect at this point. Perhaps there's a workaround for that if I need to cross that bridge. Best of luck.
Use the IP instead:
DROP USER 'root'@'127.0.0.1'; GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
For more possibilities, see this link.
To create the root user, seeing as MySQL is local & all, execute the following from the command line (Start > Run > "cmd" without quotes):
mysqladmin -u root password 'mynewpassword'
You need to access the matches in order to get at the SDI number. Here is a function that will do it (assuming there is only 1 SDI number per cell).
For the regex, I used "sdi followed by a space and one or more numbers". You had "sdi followed by a space and zero or more numbers". You can simply change the + to * in my pattern to go back to what you had.
Function ExtractSDI(ByVal text As String) As String
Dim result As String
Dim allMatches As Object
Dim RE As Object
Set RE = CreateObject("vbscript.regexp")
RE.pattern = "(sdi \d+)"
RE.Global = True
RE.IgnoreCase = True
Set allMatches = RE.Execute(text)
If allMatches.count <> 0 Then
result = allMatches.Item(0).submatches.Item(0)
End If
ExtractSDI = result
End Function
If a cell may have more than one SDI number you want to extract, here is my RegexExtract function. You can pass in a third parameter to separate each match (for example, comma-separate them), and you manually enter the pattern in the actual function call:
Ex) =RegexExtract(A1, "(sdi \d+)", ", ")
Here it is:
Function RegexExtract(ByVal text As String, _
ByVal extract_what As String, _
Optional seperator As String = "") As String
Dim i As Long, j As Long
Dim result As String
Dim allMatches As Object
Dim RE As Object
Set RE = CreateObject("vbscript.regexp")
RE.pattern = extract_what
RE.Global = True
Set allMatches = RE.Execute(text)
For i = 0 To allMatches.count - 1
For j = 0 To allMatches.Item(i).submatches.count - 1
result = result & seperator & allMatches.Item(i).submatches.Item(j)
Next
Next
If Len(result) <> 0 Then
result = Right(result, Len(result) - Len(seperator))
End If
RegexExtract = result
End Function
*Please note that I have taken "RE.IgnoreCase = True" out of my RegexExtract, but you could add it back in, or even add it as an optional 4th parameter if you like.
Because this is one of the top Google results for this problem, I would like to add what solved it for me:
I had to remove the format(opentype) from the src of the font-face, then it worked in Firefox as well. It worked fine in Chrome and Safari before that.
Use this:
import boto3
s3 = boto3.client('s3')
bucket_name = "YOUR-BUCKET-NAME"
directory_name = "DIRECTORY/THAT/YOU/WANT/TO/CREATE" #it's name of your folders
s3.put_object(Bucket=bucket_name, Key=(directory_name+'/'))
############################################################################
# 'A Generic Makefile for Building Multiple main() Targets in $PWD'
# Author: Robert A. Nader (2012)
# Email: naderra at some g
# Web: xiberix
############################################################################
# The purpose of this makefile is to compile to executable all C source
# files in CWD, where each .c file has a main() function, and each object
# links with a common LDFLAG.
#
# This makefile should suffice for simple projects that require building
# similar executable targets. For example, if your CWD build requires
# exclusively this pattern:
#
# cc -c $(CFLAGS) main_01.c
# cc main_01.o $(LDFLAGS) -o main_01
#
# cc -c $(CFLAGS) main_02.c
# cc main_02.o $(LDFLAGS) -o main_02
#
# etc, ... a common case when compiling the programs of some chapter,
# then you may be interested in using this makefile.
#
# What YOU do:
#
# Set PRG_SUFFIX_FLAG below to either 0 or 1 to enable or disable
# the generation of a .exe suffix on executables
#
# Set CFLAGS and LDFLAGS according to your needs.
#
# What this makefile does automagically:
#
# Sets SRC to a list of *.c files in PWD using wildcard.
# Sets PRGS BINS and OBJS using pattern substitution.
# Compiles each individual .c to .o object file.
# Links each individual .o to its corresponding executable.
#
###########################################################################
#
PRG_SUFFIX_FLAG := 0
#
LDFLAGS :=
CFLAGS_INC :=
CFLAGS := -g -Wall $(CFLAGS_INC)
#
## ==================- NOTHING TO CHANGE BELOW THIS LINE ===================
##
SRCS := $(wildcard *.c)
PRGS := $(patsubst %.c,%,$(SRCS))
PRG_SUFFIX=.exe
BINS := $(patsubst %,%$(PRG_SUFFIX),$(PRGS))
## OBJS are automagically compiled by make.
OBJS := $(patsubst %,%.o,$(PRGS))
##
all : $(BINS)
##
## For clarity sake we make use of:
.SECONDEXPANSION:
OBJ = $(patsubst %$(PRG_SUFFIX),%.o,$@)
ifeq ($(PRG_SUFFIX_FLAG),0)
BIN = $(patsubst %$(PRG_SUFFIX),%,$@)
else
BIN = $@
endif
## Compile the executables
%$(PRG_SUFFIX) : $(OBJS)
	$(CC) $(OBJ) $(LDFLAGS) -o $(BIN)
##
## $(OBJS) should be automagically removed right after linking.
##
veryclean:
ifeq ($(PRG_SUFFIX_FLAG),0)
	$(RM) $(PRGS)
else
	$(RM) $(BINS)
endif
##
rebuild: veryclean all
##
## eof Generic_Multi_Main_PWD.makefile
I found the solution for this problem here. Don't forget to allow verb OPTIONS on IIS app service handler.
Works fine. Thank you André Pedroso. :-)
The java.net.URL
class is in fact not at all a good way of validating URLs. MalformedURLException
is not thrown on all malformed URLs during construction. Catching IOException
on java.net.URL#openConnection().connect()
does not validate URL either, only tell wether or not the connection can be established.
Consider this piece of code:
try {
new URL("http://.com");
new URL("http://com.");
new URL("http:// ");
new URL("ftp://::::@example.com");
} catch (MalformedURLException malformedURLException) {
malformedURLException.printStackTrace();
}
..which does not throw any exceptions.
I recommend using some validation API implemented with a context-free grammar, or for very simplified validation, just use regular expressions. However, I'd welcome suggestions for a superior or standard API for this; I only recently started searching for it myself.
Note
It has been suggested that URL#toURI()
in combination with handling of the exception java.net.URISyntaxException
can facilitate validation of URLs. However, this method only catches one of the very simple cases above.
The conclusion is that there is no standard java URL parser to validate URLs.
Here is an example for Python 3 that you can edit for Python 2 ;)
from tkinter import *
from PIL import ImageTk, Image
from tkinter import filedialog
import os
root = Tk()
root.geometry("550x300+300+150")
root.resizable(width=True, height=True)
def openfn():
    filename = filedialog.askopenfilename(title='open')
    return filename

def open_img():
    x = openfn()
    img = Image.open(x)
    img = img.resize((250, 250), Image.ANTIALIAS)
    img = ImageTk.PhotoImage(img)
    panel = Label(root, image=img)
    panel.image = img
    panel.pack()
btn = Button(root, text='open image', command=open_img).pack()
root.mainloop()
I would recommend Toad data modeller
This font is not standard on all devices. It is installed by default on some Macs, but rarely on PCs and mobile devices.
To use this font on all devices, add a @font-face declaration to your CSS that links to a copy of the font hosted on your own domain.
@font-face { font-family: Delicious; src: url('Delicious-Roman.otf'); }
@font-face { font-family: Delicious; font-weight: bold; src: url('Delicious-Bold.otf'); }
Taken from css3.info
The standard C++ solution presented here by litb will not work as expected if the method happens to be defined in a base class.
For a solution that handles this situation refer to :
In Russian : http://www.rsdn.ru/forum/message/2759773.1.aspx
English Translation by Roman.Perepelitsa : http://groups.google.com/group/comp.lang.c++.moderated/tree/browse_frm/thread/4f7c7a96f9afbe44/c95a7b4c645e449f?pli=1
It is insanely clever. However, one issue with this solution is that it gives compiler errors if the type being tested is one that cannot be used as a base class (e.g. primitive types).
In Visual Studio, I noticed that if working with a method having no arguments, an extra pair of redundant ( ) needs to be inserted around the arguments to deduce( ) in the sizeof expression.
This can also happen due to a bad unzipping of the SDK. It happened to me. Don't use the built-in Windows unzip; use WinRAR to unzip the SDK.
I am a macOS user, and the following credential pair worked for me:
Username: admin
Password: admin
It's a silly suggestion, but make sure there is no typo in the branch name!
If a TestRestTemplate is a valid option in your unit test, this documentation might be relevant
Short answer: if using
@SpringBootTest(webEnvironment=WebEnvironment.RANDOM_PORT)
then @Autowired
will work. If using
@SpringBootTest(webEnvironment=WebEnvironment.MOCK)
then create a TestRestTemplate like this
private TestRestTemplate template = new TestRestTemplate();
Going down your list: I have a Utf32String class as part of my MiscUtil library, should you ever want it. (It's not been very thoroughly tested, mind you.) There's more on my Unicode page and tips for debugging Unicode problems.
The other big resource of code is unicode.org which contains more information than you'll ever be able to work your way through - possibly the most useful bit is the code charts.
try to show popup like this
new Handler().postDelayed(new Runnable(){
public void run() {
popupWindow.showAtLocation(context.getWindow().getDecorView(), Gravity.CENTER,0,0);
}
}, 200L);
You can do like this:
private void datagridview1_SelectionChanged(object sender, EventArgs e)
{
if (datagridview1.SelectedCells.Count > 0)
{
int selectedrowindex = datagridview1.SelectedCells[0].RowIndex;
DataGridViewRow selectedRow = datagridview1.Rows[selectedrowindex];
string cellValue = Convert.ToString(selectedRow.Cells["enter column name"].Value);
}
}
Per the specification, the JSON grammar's char production can take the following values:
"
-or-\
-or-control-character\"
\\
\/
\b
\f
\n
\r
\t
\u
four-hex-digitsNewlines are "control characters", so no, you may not have a literal newline within your string. However, you may encode it using whatever combination of \n
and \r
you require.
The JSONLint tool confirms that your JSON is invalid.
And, if you want to write newlines inside your JSON syntax without actually including newlines in the data, then you're doubly out of luck. While JSON is intended to be human-friendly to a degree, it is still data and you're trying to apply arbitrary formatting to that data. That is absolutely not what JSON is about.
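For example, a quick way to see the required escaping in practice, sketched here with Python's standard json module purely as an illustration:

import json

# A string containing a real newline character...
data = {"text": "line one\nline two"}

# ...is serialized with the two-character escape sequence \n, never a raw newline.
encoded = json.dumps(data)
print(encoded)                      # {"text": "line one\nline two"}
print(json.loads(encoded) == data)  # True: the newline survives the round trip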
Code to add audio to video using ffmpeg.
If audio length is greater than video length it will cut the audio to video length. If you want full audio in video remove -shortest from the cmd.
String[] cmd = new String[]{"-i", selectedVideoPath, "-i", audiopath, "-map", "1:a", "-map", "0:v", "-codec", "copy", outputFile.getPath()};
private void execFFmpegBinaryShortest(final String[] command) {
final File outputFile = new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/videoaudiomerger/"+"Vid"+"output"+i1+".mp4");
String[] cmd = new String[]{"-i", selectedVideoPath,"-i",audiopath,"-map","1:a","-map","0:v","-codec","copy","-shortest",outputFile.getPath()};
try {
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onFailure(String s) {
System.out.println("on failure----"+s);
}
@Override
public void onSuccess(String s) {
System.out.println("on success-----"+s);
}
@Override
public void onProgress(String s) {
//Log.d(TAG, "Started command : ffmpeg "+command);
System.out.println("Started---"+s);
}
@Override
public void onStart() {
//Log.d(TAG, "Started command : ffmpeg " + command);
System.out.println("Start----");
}
@Override
public void onFinish() {
System.out.println("Finish-----");
}
});
} catch (FFmpegCommandAlreadyRunningException e) {
// do nothing for now
System.out.println("exceptio :::"+e.getMessage());
}
}
Handling key events consistently is not at all easy.
Firstly, there are two different types of codes: keyboard codes (a number representing the key on the keyboard the user pressed) and character codes (a number representing a Unicode character). You can only reliably get character codes in the keypress
event. Do not try to get character codes for keyup
and keydown
events.
Secondly, you get different sets of values in a keypress
event to what you get in a keyup
or keydown
event.
I recommend this page as a useful resource. As a summary:
If you're interested in detecting a user typing a character, use the keypress
event. IE bizarrely only stores the character code in keyCode
while all other browsers store it in which
. Some (but not all) browsers also store it in charCode
and/or keyCode
. An example keypress handler:
function(evt) {
evt = evt || window.event;
var charCode = evt.which || evt.keyCode;
var charStr = String.fromCharCode(charCode);
alert(charStr);
}
If you're interested in detecting a non-printable key (such as a cursor key), use the keydown
event. Here keyCode
is always the property to use. Note that keyup
events have the same properties.
function(evt) {
evt = evt || window.event;
var keyCode = evt.keyCode;
// Check for left arrow key
if (keyCode == 37) {
alert("Left arrow");
}
}
In attr('value') you're specifically saying you're looking for the value of an attribute named value. It is preferable to use val(), as this is jQuery's out-of-the-box feature for extracting the value of form elements.
This will work if you are not blocking.
If you are planning on doing sleeps, it's absolutely imperative that you use the event to do the sleep. If you leverage the event to sleep, then if someone tells you to stop while "sleeping", it will wake up. If you use time.sleep(), your thread will only stop after it wakes up.
import threading
import time
duration = 2
def main():
    t1_stop = threading.Event()
    t1 = threading.Thread(target=thread1, args=(1, t1_stop))

    t2_stop = threading.Event()
    t2 = threading.Thread(target=thread2, args=(2, t2_stop))

    # start both threads; they loop until their stop event is set
    t1.start()
    t2.start()

    time.sleep(duration)
    # stops thread t2
    t2_stop.set()

def thread1(arg1, stop_event):
    while not stop_event.is_set():
        stop_event.wait(timeout=5)

def thread2(arg1, stop_event):
    while not stop_event.is_set():
        stop_event.wait(timeout=5)
This will return TRUE
for #VALUE!
errors (ERROR.TYPE = 3) and FALSE
for anything else.
=IF(ISERROR(A1),ERROR.TYPE(A1)=3)
Simply add Templates URL:
<a href="{% url 'service_data' d.id %}">
...XYZ
</a>
Used in django 2.0
I finally found my way through. In short, let's say your username is joe
and you hold a website under your personal filesystem /home/joe/path/to/website
.
You literally have to tell the system that nginx
is your pal.
Place nginx
in joe
group :
sudo gpasswd -a nginx joe
After that, if it still doesn't work, check the access rights of the /home/joe directory. That's probably the reason why nginx can't reach the file: even if nginx is your friend now, you still have to open the door to your house for him:
sudo chmod g+x /home/joe
That's it. That's literally all you have to do to give nginx access to your local files :)
I don't think there are security concerns with this method, because nginx is the high authority and only an admin can change the group. nginx can now read what's in joe's directories. It's only a security breach if the holder of the nginx account is different from the user whose directory you open access to, but in my case I'm the holder of both parties, as this is a local context.
The way to use the ellipsis or varargs inside the method is as if it were an array:
public void PrintWithEllipsis(String...setOfStrings) {
for (String s : setOfStrings)
System.out.println(s);
}
This method can be called as following:
obj.PrintWithEllipsis(); // prints nothing
obj.PrintWithEllipsis("first"); // prints "first"
obj.PrintWithEllipsis("first", "second"); // prints "first\nsecond"
Inside PrintWithEllipsis
, the type of setOfStrings
is an array of String.
So you could save the compiler some work and pass an array:
String[] argsVar = {"first", "second"};
obj.PrintWithEllipsis(argsVar);
For varargs methods, a sequence parameter is treated as being an array of the same type. So if two signatures differ only in that one declares a sequence and the other an array, as in this example:
void process(String[] s){}
void process(String...s){}
then a compile-time error occurs.
Source: The Java Programming Language specification, where the technical term is variable arity parameter
rather than the common term varargs
.
Simple solution would be to create a directive which you can apply to any control.
import { Directive, ElementRef, Input, Renderer, HostListener, Output, EventEmitter } from '@angular/core';
import { NgControl } from '@angular/forms';
@Directive({
selector: '[ngModel][debounce]',
})
export class Debounce
{
@Output() public onDebounce = new EventEmitter<any>();
@Input('debounce') public debounceTime: number = 500;
private modelValue = null;
constructor(public model: NgControl, el: ElementRef, renderer: Renderer){
}
ngOnInit(){
this.modelValue = this.model.value;
if (!this.modelValue){
var firstChangeSubs = this.model.valueChanges.subscribe(v =>{
this.modelValue = v;
firstChangeSubs.unsubscribe()
});
}
this.model.valueChanges
.debounceTime(this.debounceTime)
.distinctUntilChanged()
.subscribe(mv => {
if (this.modelValue != mv){
this.modelValue = mv;
this.onDebounce.emit(mv);
}
});
}
}
usage would be
<textarea [ngModel]="somevalue"
[debounce]="2000"
(onDebounce)="somevalue = $event"
rows="3">
</textarea>
My favourite way to do this is fairly simple:
if (false === ($data = file_get_contents("http://www.google.com"))) {
    $error = error_get_last();
    echo "HTTP request failed. Error was: " . $error['message'];
} else {
    echo "Everything went better than expected";
}
I found this after experimenting with the try/catch
from @enobrev above, but this allows for less lengthy (and IMO, more readable) code. We simply use error_get_last
to get the text of the last error, and file_get_contents
returns false on failure, so a simple "if" can catch that.
As of March 30, 2012, I have tried and failed to get the sloonz fork on GitHub to open images. I got it to compile ok, but it didn't actually work. I also tried building gohlke's library, and it compiled also but failed to open any images. Someone mentioned PythonMagick above, but it only compiles on Windows. See PythonMagick on the wxPython wiki.
PIL was last updated in 2009, and while its website says they are working on a Python 3 port, it's been 3 years, and the mailing list has gone cold.
To solve my Python 3 image manipulation problem, I am using subprocess.call()
to execute ImageMagick shell commands. This method works.
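For illustration, here is a minimal sketch of that approach, assuming ImageMagick's convert command is on your PATH; the file names are just placeholders:

import subprocess

# Resize an image by shelling out to ImageMagick instead of using PIL.
# subprocess.call() returns the command's exit status (0 on success).
status = subprocess.call(
    ["convert", "input.png", "-resize", "50%", "output.png"])
if status != 0:
    print("ImageMagick convert failed with exit code", status)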
Without any third-party tools or apps, you can find unused CSS and JavaScript by using the Coverage tab in Chrome DevTools. Read the post below from Google Developers: Chrome coverage tab
hash_map is a non-standard extension. unordered_map is part of std::tr1, and will be moved into the std namespace for C++0x. http://en.wikipedia.org/wiki/Unordered_map_%28C%2B%2B%29
$date=date("Y-m-d");
echo"$date";
echo"<br>SELECT DATE: <input type='date' name='date' id='datepicker'
value='$date' required >";
This works well for me:
WifiConfiguration apConfig = null;
Method method = wifimanager.getClass().getMethod("setWifiApEnabled", WifiConfiguration.class, Boolean.TYPE);
method.invoke(wifimanager, apConfig, true);
Here's what I have done to ensure things aren't executed twice:
$(document).on("page:change", function() {
// ... init things, just do not bind events ...
$(document).off("page:change");
});
I find using the jquery-turbolinks
gem or combining $(document).ready
and $(document).on("page:load")
or using $(document).on("page:change")
by itself behaves unexpectedly--especially if you're in development.
An intent is an abstract description of an operation to be performed. It can be used with startActivity to launch an Activity, broadcastIntent to send it to any interested BroadcastReceiver components, and startService(Intent) or bindService(Intent, ServiceConnection, int) to communicate with a background Service.
For more details see these links :
1) http://developer.android.com/reference/android/content/Intent.html
2) http://developer.android.com/guide/topics/intents/intents-filters.html
3) http://www.vogella.de/articles/AndroidIntent/article.html
There are many more articles available.
I don't think you can mount a Linux folder as a network drive under Windows having only access to SSH. I can suggest you use WinSCP, which allows you to transfer files through SSH, and it's free.
EDIT: well, sorry. Vinko posted before me and now I've learned a new thing :)
You've already stated why np.maximum
is different - it returns an array that is the element-wise maximum between two arrays.
As for np.amax
and np.max
: they both call the same function - np.max
is just an alias for np.amax
, and they compute the maximum of all elements in an array, or along an axis of an array.
In [1]: import numpy as np
In [2]: np.amax
Out[2]: <function numpy.core.fromnumeric.amax>
In [3]: np.max
Out[3]: <function numpy.core.fromnumeric.amax>
If you mean text in the background, I'd say use a label with the input field and position it over the input using CSS. With JS, you fade out the label when the input receives a value and fade it back in when the input is empty. This way, it is not possible for the user to submit the description text, whether by accident or intent.
One thing to note in addition to the approaches suggested is that, in OS X 10.5 (Leopard) at least, the variables set in launchd.conf
will be merged with the settings made in .profile
. I suppose this is likely to be valid for the settings in ~/.MacOSX/environment.plist
too, but I haven't verified.
In the Eclipse top menu bar:
Windows -> Preferences -> Java -> Compiler -> Errors/Warnings ->
Deprecated and restricted API -> Forbidden reference (access rules): -> change to warning
Here is a simple example that executes the ifconfig shell command on Linux:
var process = require('child_process');
process.exec('ifconfig',function (err,stdout,stderr) {
if (err) {
console.log("\n"+stderr);
} else {
console.log(stdout);
}
});
recreate()
(as mentioned by TPReal) will only restart current activity, but the previous activities will still be in back stack and theme will not be applied to them.
So, another solution for this problem is to recreate the task stack completely, like this:
TaskStackBuilder.create(getActivity())
.addNextIntent(new Intent(getActivity(), MainActivity.class))
.addNextIntent(getActivity().getIntent())
.startActivities();
EDIT:
Just put the code above after you perform the theme change in the UI or wherever else it happens. All your activities should call setTheme() before onCreate(), probably in some parent activity. It is also a normal approach to store the chosen theme in SharedPreferences, read it, and then set it using setTheme().
$ee = array('a' => 50, 'b' => 25, 'c' => 5, 'd' => 80, 'e' => 40, 'f' => 152, 'g' => 45, 'h' => 28);
$Acurr = '';
$Amax = 0;
foreach($ee as $key => $value) {
$Acurr = $value;
if($Acurr >= $Amax) {
$Amax = $Acurr;
}
}
echo "greatest number is $Amax";
prods.Remove(prods.Single(p=>p.ID == 1));
you can't modify collection in foreach, as Vincent suggests
You have to reference the System.Configuration
assembly which is in GAC.
Use of ConfigurationManager
is not WPF-specific: it is the privileged way to access configuration information for any type of application.
Please see Microsoft Docs - ConfigurationManager
Class for further info.
@David, thanks for the recommendation! @fluid_chelsea, I've just released Any+Time(TM) version 3.x which uses jQuery instead of Prototype and has a much-improved interface, so I hope it now meets your needs:
Any problems, please let me know via the comment link on my website!
After installing on kubuntu, I found that my pycharm script in ~/bin/pycharm
was just a desktop entry:
[Desktop Entry]
Version=1.0
Type=Application
Name=PyCharm Community Edition
Icon=/snap/pycharm-community/79/bin/pycharm.png
Exec=env BAMF_DESKTOP_FILE_HINT=/var/lib/snapd/desktop/applications/pycharm-community_pycharm-community.desktop /snap/bin/pycharm-community %f
Comment=Python IDE for Professional Developers
Categories=Development;IDE;
Terminal=false
StartupWMClass=jetbrains-pycharm-ce
Obviously, I could not use this to open anything from the command line:
$ pycharm setup.py
/home/eldond/bin/pycharm_old: line 1: [Desktop: command not found
/home/eldond/bin/pycharm_old: line 4: Community: command not found
But there's a hint in the desktop entry file. Looking in /snap/pycharm-community/
, I found /snap/pycharm-community/current/bin/pycharm.sh
. I removed ~/bin/pycharm
(actually renamed it to have a backup) and then did
ln -s /snap/pycharm-community/current/bin/pycharm.sh pycharm
where again, I found the start of the path by inspecting the desktop entry script I had to start with.
Now I can open files with pycharm from the command line. I don't know what I messed up during install this time; the last two times I've done fresh installs, it's had no trouble.
It may solve your problem; check your files' access level:
$ sudo chmod -R 777 /"your files location"
Use:
IF Var IS NULL THEN
var := 5;
END IF;
Oracle 9i+:
var := COALESCE(var, 5);
Other alternatives:
var := NVL(var, 5);
Reference:
For most of my select elements, I start off with an option that simply says 'Please Select' or something similar, and that option is always disabled. Then, whenever you want to clear your select's options, you can just do something like this.
Example
<select id="mySelectOption">
<option value="" selected disabled>Please select</option>
</select>
Answer
$('#mySelectOption').val('');
There should be a line in your postgresql.conf
file that says:
port = 1486
Change that.
The location of the file can vary depending on your install options. On Debian-based distros it is /etc/postgresql/8.3/main/
On Windows it is C:\Program Files\PostgreSQL\9.3\data
Don't forget to sudo service postgresql restart
for changes to take effect.
If you're not too attached to your current workspace you can create a new workspace, follow BalusC's steps for server creation, and recreate your project in the new workspace.
I got the same error after installing Eclipse Java EE IDE for Web Developers(Juno) but using the workspace of a much older Eclipse installation. When I created a new workspace I was able to get my Tomcat server running without this error.
There are three places where a file, say, can be - the (committed) tree, the index and the working copy. When you just add a file to a folder, you are adding it to the working copy.
When you do something like git add file
you add it to the index. And when you commit it, you add it to the tree as well.
It will probably help you to know the three more common flags in git reset
:
git reset [--<mode>] [<commit>]

This form resets the current branch head to <commit> and possibly updates the index (resetting it to the tree of <commit>) and the working tree depending on <mode>, which must be one of the following:

--soft
Does not touch the index file nor the working tree at all (but resets the head to <commit>, just like all modes do). This leaves all your changed files "Changes to be committed", as git status would put it.

--mixed
Resets the index but not the working tree (i.e., the changed files are preserved but not marked for commit) and reports what has not been updated. This is the default action.

--hard
Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded.
Now, when you do something like git reset HEAD
, what you are actually doing is git reset HEAD --mixed
and it will "reset" the index to the state it was before you started adding files / adding modifications to the index (via git add
). In this case, no matter what the state of the working copy was, you didn't change it a single bit, but you changed the index in such a way that is now in sync with the HEAD of the tree. Whether git add
was used to stage a previously committed but changed file, or to add a new (previously untracked) file, git reset HEAD
is the exact opposite of git add
.
git rm
, on the other hand, removes a file from the working directory and the index, and when you commit, the file is removed from the tree as well. git rm --cached
, however, removes the file from the index alone and keeps it in your working copy. In this case, if the file was previously committed, then you made the index to be different from the HEAD of the tree and the working copy, so that the HEAD now has the previously committed version of the file, the index has no file at all, and the working copy has the last modification of it. A commit now will sync the index and the tree, and the file will be removed from the tree (leaving it untracked in the working copy). When git add
was used to add a new (previously untracked) file, then git rm --cached
is the exact opposite of git add
(and is pretty much identical to git reset HEAD
).
Git 2.25 introduced a new command for these cases, git restore
, but as of Git 2.28 it is described as “experimental” in the man page, in the sense that the behavior may change.
If you cannot use std::to_string
from C++11, you can write it as it is defined on cppreference.com:
std::string to_string( int value )
Converts a signed decimal integer to a string with the same content as what std::sprintf(buf, "%d", value) would produce for sufficiently large buf.
Implementation
#include <cstdio>
#include <string>
#include <cassert>
std::string to_string( int x ) {
int length = snprintf( NULL, 0, "%d", x );
assert( length >= 0 );
char* buf = new char[length + 1];
snprintf( buf, length + 1, "%d", x );
std::string str( buf );
delete[] buf;
return str;
}
You can do more with it. Just use "%g"
to convert float or double to string, use "%x"
to convert int to hex representation, and so on.
Just to suggest another way without using if statements, you can use the get()
method for DataFrame
s. For performing the sum based on the question:
df['sum'] = df.get('A', df['B']) + df['C']
The DataFrame
get method has similar behavior as python dictionaries.
You can use the overflow property on the container div if you don't have any element that needs to show outside the container, e.g.:
<div class="cointainer">
<div class="one">Content One</div>
<div class="two">Content Two</div>
</div>
Here is the following css:
.container{
width:100%;/* As per your requirment */
height:auto;
float:left;
overflow:hidden;
}
.one{
width:200px;/* As per your requirment */
height:auto;
float:left;
}
.two{
width:200px;/* As per your requirment */
height:auto;
float:left;
}
-----------------------OR------------------------------
<div class="cointainer">
<div class="one">Content One</div>
<div class="two">Content Two</div>
<div class="clearfix"></div>
</div>
Here is the following css:
.container{
width:100%;/* As per your requirment */
height:auto;
float:left;
overflow:hidden;
}
.one{
width:200px;/* As per your requirment */
height:auto;
float:left;
}
.two{
width:200px;/* As per your requirment */
height:auto;
float:left;
}
.clearfix:before,
.clearfix:after{
display: table;
content: " ";
}
.clearfix:after{
clear: both;
}
Take a look into NSColorWell class reference.
GET vs. POST
1) Both GET and POST create an array (e.g. array( key => value, key2 => value2, key3 => value3, ...)). This array holds key/value pairs, where keys are the names of the form controls and values are the input data from the user.
2) Both GET and POST are treated as $_GET and $_POST. These are superglobals, which means that they are always accessible, regardless of scope - and you can access them from any function, class or file without having to do anything special.
3) $_GET is an array of variables passed to the current script via the URL parameters.
4) $_POST is an array of variables passed to the current script via the HTTP POST method.
When to use GET?
Information sent from a form with the GET method is visible to everyone (all variable names and values are displayed in the URL). GET also has limits on the amount of information to send. The limitation is about 2000 characters. However, because the variables are displayed in the URL, it is possible to bookmark the page. This can be useful in some cases.
GET may be used for sending non-sensitive data.
Note: GET should NEVER be used for sending passwords or other sensitive information!
When to use POST?
Information sent from a form with the POST method is invisible to others (all names/values are embedded within the body of the HTTP request) and has no limits on the amount of information to send.
Moreover POST supports advanced functionality such as support for multi-part binary input while uploading files to server.
However, because the variables are not displayed in the URL, it is not possible to bookmark the page.
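To make the difference concrete, here is a small, hypothetical sketch in Python (using only the standard library; the httpbin.org URLs and parameters are just examples) showing that GET parameters travel in the URL while POST data travels in the request body:

from urllib.parse import urlencode
from urllib.request import Request

params = {"q": "search term", "page": "2"}

# GET: the data is appended to the URL, so it is visible and bookmarkable.
get_req = Request("https://httpbin.org/get?" + urlencode(params))

# POST: the same data is sent in the request body; the URL stays clean.
post_req = Request("https://httpbin.org/post",
                   data=urlencode(params).encode("utf-8"))

print(get_req.get_method(), get_req.full_url)    # GET  .../get?q=search+term&page=2
print(post_req.get_method(), post_req.full_url)  # POST .../post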
Communicating through processes
Example:
Python: This python code block should return random temperatures.
# sensor.py
import random, time
while True:
    time.sleep(random.random() * 5)  # wait 0 to 5 seconds
    temperature = (random.random() * 20) - 5  # -5 to 15
    print(temperature, flush=True, end='')
Javascript (Nodejs): Here we will need to spawn a new child process to run our python code and then get the printed output.
// temperature-listener.js
const { spawn } = require('child_process');
const temperatures = []; // Store readings
const sensor = spawn('python', ['sensor.py']);
sensor.stdout.on('data', function(data) {
// convert Buffer object to Float
temperatures.push(parseFloat(data));
console.log(temperatures);
});
Simpler with LINQ:
public void KillProcessesAssociatedToFile(string file)
{
GetProcessesAssociatedToFile(file).ForEach(x =>
{
x.Kill();
x.WaitForExit(10000);
});
}
public List<Process> GetProcessesAssociatedToFile(string file)
{
return Process.GetProcesses()
.Where(x => !x.HasExited
&& x.Modules.Cast<ProcessModule>().ToList()
.Exists(y => y.FileName.ToLowerInvariant() == file.ToLowerInvariant())
).ToList();
}
Try this:
HTML:
<input type="submit" value="submit" name="submit" onclick="myfunction()">
jQuery:
<script type="text/javascript">
function myfunction()
{
var url = $(location).attr('href');
$('#spn_url').html('<strong>' + url + '</strong>');
}
</script>
Summary of all answers (Advantages & Disadvantages)
For a single recyclerview, you can use it inside a Coordinator layout.
Advantage - it will not load all of the recyclerview items at once, so loading is smooth.
Disadvantage - you can't load two recyclerviews inside a Coordinator layout - it produces scrolling problems.
reference - https://stackoverflow.com/a/33143512/3879847
For multiple recyclerviews with few rows, you can load them inside a NestedScrollView.
Advantage - it will scroll smoothly.
Disadvantage - it loads all rows of each recyclerview, so your activity opens with a delay.
reference - https://stackoverflow.com/a/33143512/3879847
For multiple recyclerviews with many rows (more than 100), you must go with recyclerview.
Advantage - scrolls smoothly, loads smoothly.
Disadvantage - you need to write more code and logic.
Load each recyclerview inside a main recyclerview with the help of multi-viewholders, e.g.:
MainRecyclerview
- ChildRecyclerview1 (ViewHolder1)
- ChildRecyclerview2 (ViewHolder2)
- ChildRecyclerview3 (ViewHolder3)
- Any other layout (ViewHolder4)
Reference for multi-viewHolder - https://stackoverflow.com/a/26245463/3879847
Based on Joseph Silber's answer, I would do it like that, a bit more generic.
You would have your function (let's create one based on the question):
function videoStopped(newState){
if (newState == -1) {
alert('VIDEO HAS STOPPED');
}
}
And you could have a wait function:
function wait(milliseconds, foo, arg){
setTimeout(function () {
foo(arg); // will be executed after the specified time
}, milliseconds);
}
At the end you would have:
wait(5000, videoStopped, newState);
That's a solution, I would rather not use arguments in the wait function (to have only foo();
instead of foo(arg);
) but that's for the example.
As of npm 5.2.0, once you've installed locally via
npm i typescript --save-dev
...you no longer need an entry in the scripts
section of package.json
-- you can now run the compiler with npx:
npx tsc
Now you don't have to update your package.json file every time you want to compile with different arguments.
The ".a" at the end of your DLL files is a problem, and those are there because you didn't use CMAKE to build OpenCV 2.0. Additionally you do not link to the DLL files, you link to the library files, and again, the reason you do not see the correct library files is because you didn't use CMAKE to build OpenCV 2.0. If you want to use OpenCV 2.0 you must build it for it to work correctly in Visual Studio. If you do not want to build it then I would suggest downgrading to OpenCV 1.1pre, it comes pre-built and is much more forgiving in Visual Studio.
Another option (and the one I would recommend) is to abandon OpenCV and go with EmguCV. I have been playing with OpenCV for about a year and things got much easier when I switched to EmguCV because EmguCV works with .NET, so you can use a language like C# that does not come with all the C++ baggage of pointers, header files, and memory allocation problem.
And as for the question of 64bit vs. 32bit, OpenCV does not officially support 64bit. To be on the safe side open your project properties and change the "Platform Target" under the "Build" tab from "Any CPU" to "X86". This should be done any time you do anything with OpenCV, even if you are using a wrapper like EmguCV.
A slug is a URL-friendly short label for specific content. It only contains letters, numbers, underscores or hyphens. Slugs are commonly saved with the respective content and passed as part of a URL string.
A slug can be created using SlugField.
Ex:
class Article(models.Model):
    title = models.CharField(max_length=100)
    slug = models.SlugField(max_length=100)
If you want to use the title as the slug, Django has a simple function called slugify:
from django.template.defaultfilters import slugify
class Article(models.Model):
    title = models.CharField(max_length=100)

    def slug(self):
        return slugify(self.title)
If it needs uniqueness, add unique=True to the slug field.
For instance, from the previous example:
class Article(models.Model):
    title = models.CharField(max_length=100)
    slug = models.SlugField(max_length=100, unique=True)
Too lazy to handle the slug process? Don't worry; this plugin will help you: django-autoslug
Here's a small improvement to @UltraInstinct's Tee class, modified to be a context manager that also captures any exceptions.
import traceback
import sys
# Context manager that copies stdout and any exceptions to a log file
class Tee(object):
    def __init__(self, filename):
        self.file = open(filename, 'w')
        self.stdout = sys.stdout

    def __enter__(self):
        sys.stdout = self

    def __exit__(self, exc_type, exc_value, tb):
        sys.stdout = self.stdout
        if exc_type is not None:
            self.file.write(traceback.format_exc())
        self.file.close()

    def write(self, data):
        self.file.write(data)
        self.stdout.write(data)

    def flush(self):
        self.file.flush()
        self.stdout.flush()
To use the context manager:
print("Print")
with Tee('test.txt'):
print("Print+Write")
raise Exception("Test")
print("Print")
I met the same problem on a Win10 32-bit OS. I resolved it by changing the DLL from the debug to the release version.
I think it is because the debug version of the DLL depends on other DLLs, and the release version does not.
Source control is the best way to handle this problem. If you don't want to pay, then try Bitbucket.
It's free and allows private repos for teams of up to 5 members.
Same for XAML's arc. Just close the 99.99% arc with a Z
and you've got a circle!
The simplest solution: in your application.properties file, add the following property, as mentioned in a previous answer:
spring.main.web-environment=false
For version 2.0.0 of the Spring Boot starter, use the following property:
spring.main.web-application-type=none
For documentation on all properties use this link : https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html
Make sure your image is a relative path such as:
@Url.Content("~/Content/images/myimage.png")
MVC4
<img src="~/Content/images/myimage.png" />
You could convert the byte[]
into a Base64
string
on the fly.
string base64String = Convert.ToBase64String(imageBytes);
<img src="@String.Format("data:image/png;base64,{0}", base64string)" />
One can use a type-check library like https://github.com/arasatasaygin/is.js or just extract a check snippet from there (https://github.com/arasatasaygin/is.js/blob/master/is.js#L131):
is.nan = function(value) { // NaN is number :)
return value !== value;
};
// is a given value number?
is.number = function(value) {
return !is.nan(value) && Object.prototype.toString.call(value) === '[object Number]';
};
In general if you need it to validate parameter types (on entry point of function call), you can go with JSDOC-compliant contracts (https://www.npmjs.com/package/bycontract):
/**
* This is JSDOC syntax
* @param {number|string} sum
* @param {Object.<string, string>} payload
* @param {function} cb
*/
function foo( sum, payload, cb ) {
// Test if the contract is respected at entry point
byContract( arguments, [ "number|string", "Object.<string, string>", "function" ] );
}
// Test it
foo( 100, { foo: "foo" }, function(){}); // ok
foo( 100, { foo: 100 }, function(){}); // exception
Try to use more than 360 to avoid restarting.
I use 3600 instead of 360 and this works fine for me:
<rotate xmlns:android="http://schemas.android.com/apk/res/android"
android:fromDegrees="0"
android:toDegrees="3600"
android:interpolator="@android:anim/linear_interpolator"
android:repeatCount="infinite"
android:duration="8000"
android:pivotX="50%"
android:pivotY="50%" />
First of all, I don't think that your intention is to actually use punctuation as delimiters in the split functions. Your description suggests that you simply want to eliminate punctuation from the resultant strings.
I come across this pretty frequently, and my usual solution doesn't require re.
(requires import string
):
split_without_punc = lambda text : [word.strip(string.punctuation) for word in
text.split() if word.strip(string.punctuation) != '']
# Call function
split_without_punc("Hey, you -- what are you doing?!")
# returns ['Hey', 'you', 'what', 'are', 'you', 'doing']
As a traditional function, this is still only two lines with a list comprehension (in addition to import string
):
def split_without_punctuation2(text):
    # Split by whitespace
    words = text.split()
    # Strip punctuation from each word
    return [word.strip(string.punctuation) for word in words
            if word.strip(string.punctuation) != '']

split_without_punctuation2("Hey, you -- what are you doing?!")
# returns ['Hey', 'you', 'what', 'are', 'you', 'doing']
It will also naturally leave contractions and hyphenated words intact. You can always use text.replace("-", " ")
to turn hyphens into spaces before the split.
For a more general solution (where you can specify the characters to eliminate), and without a list comprehension, you get:
def split_without(text: str, ignore: str) -> list:
    # Split by whitespace
    split_string = text.split()
    # Strip any characters in the ignore string, and ignore empty strings
    words = []
    for word in split_string:
        word = word.strip(ignore)
        if word != '':
            words.append(word)
    return words

# Situation-specific call to general function
import string
final_text = split_without("Hey, you - what are you doing?!", string.punctuation)
# returns ['Hey', 'you', 'what', 'are', 'you', 'doing']
Of course, you can always generalize the lambda function to any specified string of characters as well.
Sure, being in master
branch all you need to do is:
git merge <commit-id>
where commit-id
is hash of the last commit from newbranch
that you want to get in your master
branch.
You can find out more about any git command by doing git help <command>
. In that case it's git help merge
. And docs are saying that the last argument for merge
command is <commit>...
, so you can pass reference to any commit or even multiple commits. Though, I never did the latter myself.
This worked for me: getResources().openRawResource(R.raw.certificate)
You should set a TimeZone in your DateFormat, otherwise it will use the default one (depending on the settings of the computer).
As for CSS, Mozilla seems to be the most friendly, especially from FF 3.5+. Webkit browsers mostly just do their own thing, and ignore any style. IE is very limited, though IE8 lets you at least style border color/width.
The following actually looks fairly nice in FF 3.5+ (picking your color preference, of course):
select {
-moz-border-radius: 4px;
-moz-box-shadow: 1px 1px 5px #cfcfcf inset;
border: 1px solid #cfcfcf;
vertical-align: middle;
background-color: transparent;
}
option {
background-color: #fef5e6;
border-bottom: 1px solid #ebdac0;
border-right: 1px solid #d6bb86;
border-left: 1px solid #d6bb86;
}
option:hover {
cursor: pointer;
}
But when it comes to IE, you have to disable the background color on the option if you don't want it to display when the option menu isn't pulled down. And, as I said, webkit does its own thing.
Okay, redis is pretty user friendly but there are some gotchas.
Here are just some easy commands for working with redis on Ubuntu:
install:
sudo apt-get install redis-server
start with conf:
sudo redis-server <path to conf>
sudo redis-server config/redis.conf
stop with conf:
redis-cli shutdown
(not sure how this shuts down the pid specified in the conf. Redis must save the path to the pid somewhere on boot)
log:
tail -f /var/log/redis/redis-server.log
Also, various example confs floating around online and on this site were beyond useless. The best, sure fire way to get a compatible conf is to copy-paste the one your installation is already using. You should be able to find it here:
/etc/redis/redis.conf
Then paste it at <path to conf>
, tweak as needed and you're good to go.
Actually, your answer is not complete, as the values also depend on the wrapping container. In the case of relative or linear layouts, the values behave like this:
In the case of a horizontal scroll view, your code will work.
I'd like to provide an alternate solution, a robust solution similar to what I am about to propose was required in the latest version of ggtern, since introducing the canvas rotation feature.
Basically, you need to determine the relative positions using trigonometry, by building a function which returns an element_text
object, given angle (ie degrees) and positioning (ie one of x,y,top or right) information.
#Load Required Libraries
library(ggplot2)
library(gridExtra)
#Build Function to Return Element Text Object
rotatedAxisElementText = function(angle,position='x'){
angle = angle[1];
position = position[1]
positions = list(x=0,y=90,top=180,right=270)
if(!position %in% names(positions))
stop(sprintf("'position' must be one of [%s]",paste(names(positions),collapse=", ")),call.=FALSE)
if(!is.numeric(angle))
stop("'angle' must be numeric",call.=FALSE)
rads = (angle - positions[[ position ]])*pi/180
hjust = 0.5*(1 - sin(rads))
vjust = 0.5*(1 + cos(rads))
element_text(angle=angle,vjust=vjust,hjust=hjust)
}
Frankly, in my opinion, an 'auto' option should be made available in ggplot2 for the hjust and vjust arguments when specifying the angle. Anyway, let's demonstrate how the above works.
#Demonstrate Usage for a Variety of Rotations
df = data.frame(x=0.5,y=0.5)
plots = lapply(seq(0,90,length.out=4),function(a){
ggplot(df,aes(x,y)) +
geom_point() +
theme(axis.text.x = rotatedAxisElementText(a,'x'),
axis.text.y = rotatedAxisElementText(a,'y')) +
labs(title = sprintf("Rotated %s",a))
})
grid.arrange(grobs=plots)
Which produces the following:
Since you need to match content without including it in the result (must
match name="
but it's not part of the desired result) some form of
zero-width matching or group capturing is required. This can be done
easily with the following tools:
With Perl you could use the n
option to loop line by line and print
the content of a capturing group if it matches:
perl -ne 'print "$1\n" if /name="(.*?)"/' filename
If you have an improved version of grep, such as GNU grep, you may have
the -P
option available. This option will enable Perl-like regex,
allowing you to use \K
which is a shorthand lookbehind. It will reset
the match position, so anything before it is zero-width.
grep -Po 'name="\K.*?(?=")' filename
The o
option makes grep print only the matched text, instead of the
whole line.
Another way is to use a text editor directly. With Vim, one of the
various ways of accomplishing this would be to delete lines without
name=
and then extract the content from the resulting lines:
:v/.*name="\v([^"]+).*/d|%s//\1
If you don't have access to these tools, for some reason, something similar could be achieved with standard grep. However, without the look around it will require some cleanup later:
grep -o 'name="[^"]*"' filename
In all of the commands above the results will be sent to stdout
. It's
important to remember that you can always save them by piping it to a
file by appending:
> result
to the end of the command.
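If you happen to prefer a scripting language over the command-line tools, the same capturing-group idea carries over directly; here is a minimal Python sketch (the file name is just a placeholder):

import re

# Same idea as the Perl/grep one-liners: match name="..." but capture
# only the part between the quotes.
with open("filename") as f:
    for match in re.finditer(r'name="(.*?)"', f.read()):
        print(match.group(1))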
System.Diagnostics.Process.Start("http://www.webpage.com");
One of many ways.
You can simply write :
class A(object):
    def __init__(self):
        print "Initialiser A was called"

class B(A):
    def __init__(self):
        A.__init__(self)
        # A.__init__(self,<parameters>) if you want to call with parameters
        print "Initialiser B was called"

class C(B):
    def __init__(self):
        # A.__init__(self) # if you want to call most super class...
        B.__init__(self)
        print "Initialiser C was called"
By default, the classes in the csv
module use Windows-style line terminators (\r\n
) rather than Unix-style (\n
). Could this be what’s causing the apparent double line breaks?
If so, you can override it in the DictWriter
constructor:
output = csv.DictWriter(open('file3.csv','w'), delimiter=',', lineterminator='\n', fieldnames=headers)
Deleting the client ID and creating a new one a couple of times worked for me.
You can review the emulator issues in the Google I/O 2011: Android Development Tools talk, starting at 0:40:20.
The emulator runs slowly because the complete Android environment is running on emulated hardware and the instructions are executed on an emulated ARM processor as well.
The main choking point is rendering since it's not running on any dedicated hardware but it's actually being performed through software rendering. Lowering the screen size will drastically improve emulator performance. Getting more/faster memory isn't going to help.
They mentioned, at the time, that they were developing an interface that would allow the emulator to pipe certain instructions through the host hardware, so eventually you'll be able to boost emulator performance with the raw power of desktop hardware.
Do you mean why doesn't the language support multithreading or why don't JavaScript engines in browsers support multithreading?
The answer to the first question is that JavaScript in the browser is meant to run in a sandbox and in a machine/OS-independent way; adding multithreading support would complicate the language and tie it too closely to the OS.
In AndroidManifest.xml
, set android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
in application
tag.
Individual activities can override the default by setting their own theme attributes.
As Django documentation says:
prefetch_related()
Returns a QuerySet that will automatically retrieve, in a single batch, related objects for each of the specified lookups.
This has a similar purpose to select_related, in that both are designed to stop the deluge of database queries that is caused by accessing related objects, but the strategy is quite different.
select_related works by creating an SQL join and including the fields of the related object in the SELECT statement. For this reason, select_related gets the related objects in the same database query. However, to avoid the much larger result set that would result from joining across a ‘many’ relationship, select_related is limited to single-valued relationships - foreign key and one-to-one.
prefetch_related, on the other hand, does a separate lookup for each relationship, and does the ‘joining’ in Python. This allows it to prefetch many-to-many and many-to-one objects, which cannot be done using select_related, in addition to the foreign key and one-to-one relationships that are supported by select_related. It also supports prefetching of GenericRelation and GenericForeignKey, however, it must be restricted to a homogeneous set of results. For example, prefetching objects referenced by a GenericForeignKey is only supported if the query is restricted to one ContentType.
More information about this: https://docs.djangoproject.com/en/2.2/ref/models/querysets/#prefetch-related
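To make the difference concrete, here is a minimal sketch; the Author/Book models are hypothetical stand-ins, not from the question:
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    title = models.CharField(max_length=100)
    author = models.ForeignKey(Author, on_delete=models.CASCADE, related_name='books')

# select_related: one query with an SQL JOIN; works for single-valued relations.
books = Book.objects.select_related('author')

# prefetch_related: one extra query per lookup, 'joined' in Python; works for the 'many' side.
authors = Author.objects.prefetch_related('books')
for author in authors:
    # no additional queries here - the books were fetched in the prefetch batch
    titles = [book.title for book in author.books.all()]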
It works in my react project:
import FileSaver from 'file-saver';
// ...
onTestSaveFile() {
var blob = new Blob(["Hello, world!"], {type: "text/plain;charset=utf-8"});
FileSaver.saveAs(blob, "hello world.txt");
}
An intuitive one-liner: dplyr::select_if(~!all(is.na(.))). It keeps only the columns that are not entirely NA, i.e. it deletes the columns in which every element is missing.
> df <- data.frame( id = 1:10 , nas = rep( NA , 10 ) , vals = sample( c( 1:3 , NA ) , 10 , repl = TRUE ) )
> df %>% glimpse()
Observations: 10
Variables: 3
$ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
$ nas <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
$ vals <int> NA, 1, 1, NA, 1, 1, 1, 2, 3, NA
> df %>% select_if(~!all(is.na(.)))
id vals
1 1 NA
2 2 1
3 3 1
4 4 NA
5 5 1
6 6 1
7 7 1
8 8 2
9 9 3
10 10 NA
If you just use ROUND then the two end numbers (1 and 9) will occur less frequently. To get an even distribution of integers between 1 and 9:
SELECT MOD(Round(DBMS_RANDOM.Value(1, 99)), 9) + 1 FROM DUAL
Map:
Map transformation.
The map works on a single Row at a time.
Map returns after each input Row.
The map doesn’t hold the output result in Memory.
With map there is no way to figure out when to end the service.
// map example
val dfList = (1 to 100).toList
val df = dfList.toDF()
val dfInt = df.map(x => x.getInt(0)+2)
display(dfInt)
MapPartition:
MapPartition transformation.
MapPartition works on a partition at a time.
MapPartition returns after processing all the rows in the partition.
MapPartition output is retained in memory, as it can return after processing all the rows in a particular partition.
MapPartition service can be shut down before returning.
// MapPartition example
val dfList = (1 to 100).toList
val df = dfList.toDF()
val df1 = df.repartition(4).rdd.mapPartitions(itr => Iterator(itr.length))
df1.collect()
//display(df1.collect())
For more details, please refer to the Spark map vs mapPartitions transformation article.
Hope this is helpful!
In NetBeans, you can right-click and then save as a .txt file. Then, based on the created .txt file, you can convert it to whatever format you want.
Use iloc to access by position (rather than label):
In [11]: df = pd.DataFrame([[1, 2], [3, 4]], ['a', 'b'], ['A', 'B'])
In [12]: df
Out[12]:
A B
a 1 2
b 3 4
In [13]: df.iloc[0] # first row in a DataFrame
Out[13]:
A 1
B 2
Name: a, dtype: int64
In [14]: df['A'].iloc[0] # first item in a Series (Column)
Out[14]: 1
As you are aware, everything passed in an email message has to be textualized.
An <img /> tag is sufficient (the URL of the image must be linked to a Content-ID). A typical email example will look like this:
From: foo1atbar.net
To: foo2atbar.net
Subject: A simple example
Mime-Version: 1.0
Content-Type: multipart/related; boundary="boundary-example"; type="text/html"
--boundary-example
Content-Type: text/html; charset="US-ASCII"
... text of the HTML document, which might contain a URI
referencing a resource in another body part, for example
through a statement such as:
<IMG SRC="cid:foo4atfoo1atbar.net" ALT="IETF logo">
--boundary-example
Content-Location: CID:somethingatelse ; this header is disregarded
Content-ID: <foo4atfoo1atbar.net>
Content-Type: IMAGE/GIF
Content-Transfer-Encoding: BASE64
R0lGODlhGAGgAPEAAP/////ZRaCgoAAAACH+PUNv
cHlyaWdodCAoQykgMTk5LiBVbmF1dGhvcml6ZWQgZHV
wbGljYXRpb24gcHJvaGliaXRlZC4A etc...
--boundary-example--
As you can see, the Content-ID: <foo4atfoo1atbar.net>
ID is matched to the <IMG>
at SRC="cid:foo4atfoo1atbar.net"
. That way, the client browser will render your image as content and not as an attachment.
Hope this helps.
This is an old question, and the answer depends more upon when you need to start your routines. Since no one wants a null reference exception, it is always best to check for null first and then use as needed; that alone may save you a lot of grief.
The most common reason for this type of question is when a container or custom control type attempts to access properties initialized outside of the custom class, where those properties have not yet been initialized, potentially causing null values to populate and even null reference exceptions on object types. It means your class is running before it is fully initialized - before you have finished setting your properties, etc. Another possible reason for this type of question is knowing when to perform custom graphics.
The best answer to the question of when to start executing code following the form load event is to monitor the WM_Paint message or hook directly into the paint event itself. Why? The paint event only fires when all modules have fully loaded with respect to your form load event. Note: this.Visible == true is not always true when it is set true, so it is not used at all for this purpose except to hide a form.
The following is a complete example of how to start executing your code following the form load event. It is recommended that you do not unnecessarily tie up the paint message loop, so we'll create an event that will start executing your code outside that loop.
using System.Windows.Forms;
namespace MyProgramStartingPlaceExample {
/// <summary>
/// Main UI form object
/// </summary>
public class Form1 : Form
{
/// <summary>
/// Main form load event handler
/// </summary>
public Form1()
{
// Initialize ONLY. Setup your controls and form parameters here. Custom controls should wait for "FormReady" before starting up too.
this.Text = "My Program title before form loaded";
// Size needed to see the title text
this.Width = 420;
// Setup the sub or function that will handle your "start up" routine
this.StartUpEvent += StartUPRoutine;
// Optional: Custom control simulation startup sequence:
// Define your class or control in variable. ie. var MyControlClass new CustomControl;
// Setup your parameters only. ie. CustomControl.size = new size(420, 966); Do not validate during initialization wait until "FormReady" is set to avoid possible null values etc.
// Inside your control or class have a property and assign it as bool FormReady - do not validate anything until it is true and you'll be good!
}
/// <summary>
/// The main entry point for the application which sets security permissions when set.
/// </summary>
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
}
#region "WM_Paint event hooking with StartUpEvent"
//
// Create a delegate for our "StartUpEvent"
public delegate void StartUpHandler();
//
// Create our event handle "StartUpEvent"
public event StartUpHandler StartUpEvent;
//
// Our FormReady will only be set once, just the way we intended.
// Since it is a global variable, we can poll it elsewhere as well to determine if we should begin code execution!
bool FormReady;
//
// The WM_Paint message handler: Used mostly to paint nice things to controls and screen
protected override void OnPaint(PaintEventArgs e)
{
// Check if Form is ready for our code ?
if (FormReady == false) // Place a break point here to see the initialized version of the title on the form window
{
// We only want this to occur once for our purpose here.
FormReady = true;
//
// Fire the start up event which then will call our "StartUPRoutine" below.
StartUpEvent();
}
//
// Always call base methods unless overriding the entire function
base.OnPaint(e);
}
#endregion
#region "Your StartUp event Entry point"
//
// Begin executing your code here to validate properties etc. and to run your program. Enjoy!
// Entry point is just following the very first WM_Paint message - an ideal starting place following form load
void StartUPRoutine()
{
// Replace the initialized text with the following
this.Text = "Your Code has executed after the form's load event";
//
// Anyway, this is the moment when the form is fully loaded and ready to go - you can also use these methods for your classes to synchronize execution with easy modifications, but here is a good starting point.
// Option: Set FormReady on your controls manually, i.e. CustomControl.FormReady = true; or subscribe to the StartUpEvent event inside your class and use that as your entry point for validating and unleashing its code.
//
// Many options: The rest is up to you!
}
#endregion
}
}
link_directories might fail to work; in that case, add each static library with its full path, like the following:
target_link_libraries(foo /path_to_static_library/libbar.a)
Get yourself a 64-bit JVM from Oracle.
What you put inside the <dependencies> tag of the root pom will be included by all child modules of the root pom. If all your modules use that dependency, this is the way to go.
However, if only 3 out of 10 of your child modules use some dependency, you do not want this dependency to be included in all your child modules. In that case, you can just put the dependency inside the <dependencyManagement> tag. This will make sure that any child module that needs the dependency must declare it in its own pom file, but it will use the same version of that dependency as specified in your <dependencyManagement> tag.
You can also use <dependencyManagement> to modify the version used in transitive dependencies, because the version declared in the uppermost pom file is the one that will be used. This can be useful if your project A includes an external project B v1.0 that includes another external project C v1.0. Sometimes it happens that a security breach is found in project C v1.0 which is corrected in v1.1, but the developers of B are slow to update their project to use v1.1 of C. In that case, you can simply declare a dependency on C v1.1 in your project's root pom inside <dependencyManagement>, and everything will be good (assuming that B v1.0 will still be able to compile with C v1.1).
Use the Figure.savefig()
method, like so:
ax = s.hist() # s is an instance of Series
fig = ax.get_figure()
fig.savefig('/path/to/figure.pdf')
It doesn't have to end in pdf; there are many options. Check out the documentation.
Alternatively, you can use the pyplot
interface and just call the savefig
as a function to save the most recently created figure:
import matplotlib.pyplot as plt
s.hist()
plt.savefig('path/to/figure.pdf') # saves the current figure
According to this: http://www.vistax64.com/vista-installation-setup/33219-regsvr32-error-0x80004005.html
Run it in an elevated command prompt.
Similar to the answers above, but for Python 3; arguably readable and arguably extensible:
from pathlib import Path
class DisplayablePath(object):
    display_filename_prefix_middle = '├──'
    display_filename_prefix_last = '└──'
    display_parent_prefix_middle = '    '
    display_parent_prefix_last = '│   '
def __init__(self, path, parent_path, is_last):
self.path = Path(str(path))
self.parent = parent_path
self.is_last = is_last
if self.parent:
self.depth = self.parent.depth + 1
else:
self.depth = 0
@property
def displayname(self):
if self.path.is_dir():
return self.path.name + '/'
return self.path.name
@classmethod
def make_tree(cls, root, parent=None, is_last=False, criteria=None):
root = Path(str(root))
criteria = criteria or cls._default_criteria
displayable_root = cls(root, parent, is_last)
yield displayable_root
children = sorted(list(path
for path in root.iterdir()
if criteria(path)),
key=lambda s: str(s).lower())
count = 1
for path in children:
is_last = count == len(children)
if path.is_dir():
yield from cls.make_tree(path,
parent=displayable_root,
is_last=is_last,
criteria=criteria)
else:
yield cls(path, displayable_root, is_last)
count += 1
@classmethod
def _default_criteria(cls, path):
return True
def displayable(self):
if self.parent is None:
return self.displayname
_filename_prefix = (self.display_filename_prefix_last
if self.is_last
else self.display_filename_prefix_middle)
parts = ['{!s} {!s}'.format(_filename_prefix,
self.displayname)]
parent = self.parent
while parent and parent.parent is not None:
parts.append(self.display_parent_prefix_middle
if parent.is_last
else self.display_parent_prefix_last)
parent = parent.parent
return ''.join(reversed(parts))
Example usage:
paths = DisplayablePath.make_tree(Path('doc'))
for path in paths:
print(path.displayable())
Example output:
doc/
├── _static/
│   ├── embedded/
│   │   ├── deep_file
│   │   └── very/
│   │       └── deep/
│   │           └── folder/
│   │               └── very_deep_file
│   └── less_deep_file
├── about.rst
├── conf.py
└── index.rst
If you are running on an NGINX / PHP-FPM stack, your best bet is probably to just reload php-fpm:
service php-fpm reload
(or whatever your reload command may be on your system)
iGoogle gadgets have to actively implement resizing, so my guess is in a cross-domain model you can't do this without the remote content taking part in some way. If your content can send a message with the new size to the container page using typical cross-domain communication techniques, then the rest is simple.
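For example, one common cross-domain technique is postMessage; this is a minimal sketch (the iframe id "gadget" is hypothetical, and a real page should also check event.origin):
// Inside the framed content: report its height to the container page.
window.parent.postMessage({ height: document.body.scrollHeight }, '*');

// In the container page: resize the iframe when the message arrives.
window.addEventListener('message', function (event) {
  var frame = document.getElementById('gadget');
  if (frame && event.data && typeof event.data.height === 'number') {
    frame.style.height = event.data.height + 'px';
  }
});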
In my case, the cause of the Apple Mach-O Linker error was the inclusion of a source code file (.m) in a resource bundle target.
Verify that the recently created .m file is not included in a bundle: select the file in the project navigator, open the file inspector, and make sure that the resource bundle checkbox is deselected in the Target Membership section.
You can use a pseudo-element to insert that character before each list item:
ul {
    list-style: none;
}

ul li:before {
    content: '?';
}

<ul>
    <li>this is my text</li>
    <li>this is my text</li>
    <li>this is my text</li>
    <li>this is my text</li>
    <li>this is my text</li>
</ul>
Does the parent have a height? If you set the parent's height, like so:
div.parent { height: 300px };
then you can make the child stretch to the full height, like this:
div.child-right { height: 100% };
EDIT
The easiest way to redirect to HTTPS with Laravel is by using .htaccess
so all you have to do is add the following lines to your .htaccess file and you are good to go.
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
Make sure you add it before the existing (default) code found in the .htaccess file, else HTTPS will not work.
This is because the existing (default) code already handles a redirect that sends all traffic to the home page, where the route then takes over depending on your URL, so putting this code first means that .htaccess will redirect all traffic to HTTPS before the route takes over.
$objectQuery = "SELECT table_master.*, ((acos(sin((" . $latitude . "*pi()/180)) * sin((`latitude`*pi()/180))+cos((" . $latitude . "*pi()/180)) * cos((`latitude`*pi()/180)) * cos(((" . $longitude . "- `longtude`)* pi()/180))))*180/pi())*60*1.1515 as distance FROM `table_post_broadcasts` JOIN table_master ON table_post_broadcasts.master_id = table_master.id WHERE table_master.type_of_post ='type' HAVING distance <='" . $Radius . "' ORDER BY distance asc";
For environment variables in Maven, you can set them as below.
http://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#environmentVariables http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html#environmentVariables
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
...
<configuration>
<includes>
...
</includes>
<environmentVariables>
<WSNSHELL_HOME>conf</WSNSHELL_HOME>
</environmentVariables>
</configuration>
</plugin>
Do as follows:
1. Select all the uncommitted files (CMD+A) that you want to delete or discard.
2. Right click on the selected uncommitted files that you want to delete.
3. Choose Remove from the drop-down list.
from the drop-down listWhen it comes to Google Analytics I found raik's answer at Secure Google tracking cookies very useful. It set secure and samesite to a value.
ga('create', 'UA-XXXXX-Y', {
cookieFlags: 'max-age=7200;secure;samesite=none'
});
Also more info in this blog post
Try this way.
var socket = io.connect('http://...');
console.log(socket.socket.sessionid);
Here is a 2.0.0 version for the current document in Chrome w/ keyboard shortcut:
tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "Chrome",
"type": "process",
"command": "chrome.exe",
"windows": {
"command": "C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe"
},
"args": [
"${file}"
],
"problemMatcher": []
}
]
}
keybindings.json
:
{
"key": "ctrl+g",
"command": "workbench.action.tasks.runTask",
"args": "Chrome"
}
For running on a webserver:
https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer
Another approach would be to create an embedded server to "host" your servlet, allowing you to write calls against it with libraries meant to make calls to actual servers (the usefulness of this approach somewhat depends on how easily you can make "legitimate" programmatic calls to the server - I was testing a JMS (Java Messaging Service) access point, for which clients abound).
There are a couple of different routes you can go - the usual two are tomcat and jetty.
Warning: something to be mindful of when choosing the server to embed is the version of servlet-api you are using (the library which provides classes like HttpServletRequest). If you are using 2.5, I found Jetty 6.x to work well (which is the example I'll give below). If you're using servlet-api 3.0, the tomcat-7 embedded stuff seems to be a good option, however I had to abandon my attempt to use it, as the application I was testing used servlet-api 2.5. Trying to mix the two will result in NoSuchMethod and other such exceptions when attempting to configure or start the server.
You can set up such a server like this (Jetty 6.1.26, servlet-api 2.5):
public void startServer(int port, Servlet yourServletInstance){
Server server = new Server(port);
Context root = new Context(server, "/", Context.SESSIONS);
root.addServlet(new ServletHolder(yourServletInstance), "/servlet/context/path");
//If you need the servlet context for anything, such as spring wiring, you could get it like this
//ServletContext servletContext = root.getServletContext();
server.start();
}
A more "precise" calculation. That is, the number of week/month/quarter/year for a non-complete week/month/quarter/year is the fraction of calendar days in that week/month/quarter/year. For example, the number of months between 2016-02-22 and 2016-03-31 is 8/29 + 31/31 = 1.27586
explanation inline with code
#' Calculate precise number of periods between 2 dates
#'
#' @details The number of week/month/quarter/year for a non-complete week/month/quarter/year
#' is the fraction of calendar days in that week/month/quarter/year.
#' For example, the number of months between 2016-02-22 and 2016-03-31
#' is 8/29 + 31/31 = 1.27586
#'
#' @param startdate start Date of the interval
#' @param enddate end Date of the interval
#' @param period character. It must be one of 'day', 'week', 'month', 'quarter' and 'year'
#'
#' @examples
#' identical(numPeriods(as.Date("2016-02-15"), as.Date("2016-03-31"), "month"), 15/29 + 1)
#' identical(numPeriods(as.Date("2016-02-15"), as.Date("2016-03-31"), "quarter"), (15 + 31)/(31 + 29 + 31))
#' identical(numPeriods(as.Date("2016-02-15"), as.Date("2016-03-31"), "year"), (15 + 31)/366)
#'
#' @return exact number of periods between
#'
numPeriods <- function(startdate, enddate, period) {
numdays <- as.numeric(enddate - startdate) + 1
if (grepl("day", period, ignore.case=TRUE)) {
return(numdays)
} else if (grepl("week", period, ignore.case=TRUE)) {
return(numdays / 7)
}
#create a sequence of dates between start and end dates
effDaysinBins <- cut(seq(startdate, enddate, by="1 day"), period)
#use the earliest start date of the previous bins and create a breaks of periodic dates with
#user's period interval
intervals <- seq(from=as.Date(min(levels(effDaysinBins)), "%Y-%m-%d"),
by=paste("1",period),
length.out=length(levels(effDaysinBins))+1)
#create a sequence of dates between the earliest interval date and last date of the interval
#that contains the enddate
allDays <- seq(from=intervals[1],
to=intervals[intervals > enddate][1] - 1,
by="1 day")
#bin all days in the whole period using previous breaks
allDaysInBins <- cut(allDays, intervals)
#calculate ratio of effective days to all days in whole period
sum( tabulate(effDaysinBins) / tabulate(allDaysInBins) )
} #numPeriods
Please let me know if you find more boundary cases where the above solution does not work.
When you cherry-pick, it creates a new commit with a new SHA. If you do:
git cherry-pick -x <sha>
then at least you'll get the commit message from the original commit appended to your new commit, along with the original SHA, which is very useful for tracking cherry-picks.
A previous answer using LPAD()
is optimal. However, in the event you want to do special or advanced processing, here is a method that allows more iterative control over the padding. Also serves as an example using other constructs to achieve the same thing.
UPDATE
mytable
SET
mycolumn = CONCAT(
REPEAT(
"0",
8 - LENGTH(mycolumn)
),
mycolumn
)
WHERE
LENGTH(mycolumn) < 8;
For full image background, check this:
html {
background: url(images/bg.jpg) no-repeat center center fixed;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
}
datetime.datetime.fromtimestamp() is correct, except you probably have a timestamp in milliseconds (as in JavaScript), while fromtimestamp() expects a Unix timestamp in seconds.
Do it like that:
>>> import datetime
>>> your_timestamp = 1331856000000
>>> date = datetime.datetime.fromtimestamp(your_timestamp / 1e3)
and the result is:
>>> date
datetime.datetime(2012, 3, 16, 1, 0)
Does it answer your question?
EDIT: J.F. Sebastian correctly suggested to use true division by 1e3
(float 1000
). The difference is significant, if you would like to get precise results, thus I changed my answer. The difference results from the default behaviour of Python 2.x, which always returns int
when dividing (using /
operator) int
by int
(this is called floor division). By replacing the divisor 1000
(being an int
) with the 1e3
divisor (being representation of 1000
as float) or with float(1000)
(or 1000.
etc.), the division becomes true division. Python 2.x returns float
when dividing int
by float
, float
by int
, float
by float
etc. And when there is some fractional part in the timestamp passed to fromtimestamp()
method, this method's result also contains information about that fractional part (as the number of microseconds).
.joins works as a database JOIN: it joins two or more tables and fetches the selected data from the backend (database).
.includes works like a database LEFT JOIN. It loads all the records of the left side, regardless of the right-hand-side model. It is used for eager loading, because it loads all associated objects into memory. If we call associations on the includes query result, it does not fire a query against the database; it simply returns the data from memory, because it has already loaded the data.
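A minimal sketch of the difference, assuming a hypothetical User model with has_many :posts:
# .joins builds an SQL JOIN; good for filtering, but does not load posts into memory.
users = User.joins(:posts).where(posts: { published: true })

# .includes eager loads the association, avoiding an N+1 query in the loop below.
User.includes(:posts).each do |user|
  user.posts.size  # answered from memory, no extra query per user
end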
try to change your method if you need to loop!
Within the parent stored procedure, create a #temp table that contains the data that you need to process. Call the child stored procedure; the #temp table will be visible and you can process it, hopefully working with the entire set of data and without a cursor or loop.
This really depends on what the child stored procedure is doing. If you are UPDATE-ing, you can "update from", joining in the #temp table, and do all the work in one statement without a loop. The same can be done for INSERTs and DELETEs. If you need to do multiple updates with IFs, you can convert those to multiple UPDATE FROM statements with the #temp table and use CASE statements or WHERE conditions, as sketched below.
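For example, a per-row loop with IF logic can often be collapsed into one set-based statement like this sketch (all table and column names here are made up):
UPDATE o
SET    o.Status = CASE WHEN t.Amount > 100 THEN 'VIP' ELSE 'Standard' END
FROM   dbo.Orders AS o
JOIN   #temp     AS t ON t.OrderId = o.OrderId
WHERE  t.NeedsProcessing = 1;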
When working in a database try to lose the mindset of looping, it is a real performance drain, will cause locking/blocking and slow down the processing. If you loop everywhere, your system will not scale very well, and will be very hard to speed up when users start complaining about slow refreshes.
Post the content of this procedure you want call in a loop, and I'll bet 9 out of 10 times, you could write it to work on a set of rows.
Arbitrarily choosing to keep the minimum PIC_ID. Also, avoid using the implicit join syntax.
SELECT U.NAME, MIN(P.PIC_ID)
FROM USERS U
INNER JOIN POSTINGS P1
ON U.EMAIL_ID = P1.EMAIL_ID
INNER JOIN PICTURES P
ON P1.PIC_ID = P.PIC_ID
WHERE P.CAPTION LIKE '%car%'
GROUP BY U.NAME;
From the @LearnRPG answer but with 1.0:
// send to current request socket client
socket.emit('message', "this is a test");
// sending to all clients, include sender
io.sockets.emit('message', "this is a test"); //still works
//or
io.emit('message', 'this is a test');
// sending to all clients except sender
socket.broadcast.emit('message', "this is a test");
// sending to all clients in 'game' room(channel) except sender
socket.broadcast.to('game').emit('message', 'nice game');
// sending to all clients in 'game' room(channel), include sender
// docs says "simply use to or in when broadcasting or emitting"
io.in('game').emit('message', 'cool game');
// sending to individual socketid, socketid is like a room
socket.broadcast.to(socketid).emit('message', 'for your eyes only');
To answer @Crashalot's comment, socketid
comes from:
var io = require('socket.io')(server);
io.on('connection', function(socket) { console.log(socket.id); })
The CONVERT function helps. Check this:
declare @erro_event_timestamp as Timestamp;
set @erro_event_timestamp = CONVERT(Timestamp, '2020-07-06 05:19:44.380', 121);
The magic number 121 I found here: https://www.w3schools.com/SQL/func_sqlserver_convert.asp
Port 80 might be busy with another application like IIS. If you don't want to stop it, you can change the Apache port. Here is the way:
1. Open httpd.conf.
2. Change the port on the Listen directive (e.g. replace Listen 80 with Listen 1234).
)Visual Studio 2015 doesn't install C++ by default. You have to rerun the setup, select Modify and then check Programming Language -> C++
var span_Text = document.getElementById("span_Id").innerText;

console.log(span_Text)

<span id="span_Id">I am the Text </span>
What's even easier is to just use the BackgroundWorker control...
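A minimal sketch of that approach (the label and the simulated work are placeholders, not from the question):
using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly Label resultLabel = new Label { Dock = DockStyle.Top };

    public MainForm()
    {
        Controls.Add(resultLabel);
        var worker = new BackgroundWorker();

        // Runs on a thread-pool thread; do NOT touch UI controls here.
        worker.DoWork += (sender, e) =>
        {
            Thread.Sleep(2000);   // stand-in for the expensive work
            e.Result = "done";
        };

        // Runs back on the UI thread when the work finishes.
        worker.RunWorkerCompleted += (sender, e) =>
        {
            resultLabel.Text = (string)e.Result;
        };

        worker.RunWorkerAsync();
    }
}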
from selenium import webdriver
from selenium.webdriver.common.proxy import *
myProxy = "86.111.144.194:3128"
proxy = Proxy({
'proxyType': ProxyType.MANUAL,
'httpProxy': myProxy,
'ftpProxy': myProxy,
'sslProxy': myProxy,
'noProxy':''})
driver = webdriver.Firefox(proxy=proxy)
driver.set_page_load_timeout(30)
driver.get('http://whatismyip.com')
Although it is an old question already answered, maybe those next examples can complement the accepted answer and they can be useful for some new programmers in Android as I am.
Option 1 - "addToBackStack()" is never used
Case 1A - adding, removing, and clicking Back button
Activity : onCreate() - onStart() - onResume() Activity is visible
add Fragment A : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment A is visible
add Fragment B : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment B is visible
add Fragment C : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
remove Fragment C : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Fragment B is visible
(Back button clicked)
Activity : onPause() - onStop() - onDestroy()
Fragment A : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach()
Fragment B : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() App is closed, nothing is visible
Case 1B - adding, replacing, and clicking Back button
Activity : onCreate() - onStart() - onResume() Activity is visible
add Fragment A : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment A is visible
add Fragment B : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment B is visible
(replace Fragment C)
Fragment B : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach()
Fragment A : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach()
Fragment C : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
(Back button clicked)
Activity : onPause() - onStop() - onDestroy()
Fragment C : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() App is closed, nothing is visible
Option 2 - "addToBackStack()" is always used
Case 2A - adding, removing, and clicking Back button
Activity : onCreate() - onStart() - onResume() Activity is visible
add Fragment A : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment A is visible
add Fragment B : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment B is visible
add Fragment C : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
remove Fragment C : onPause() - onStop() - onDestroyView() Fragment B is visible
(Back button clicked)
Fragment C : onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
(Back button clicked)
Fragment C : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Fragment B is visible
(Back button clicked)
Fragment B : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Fragment A is visible
(Back button clicked)
Fragment A : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Activity is visible
(Back button clicked)
Activity : onPause() - onStop() - onDestroy() App is closed, nothing is visible
Case 2B - adding, replacing, removing, and clicking Back button
Activity : onCreate() - onStart() - onResume() Activity is visible
add Fragment A : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment A is visible
add Fragment B : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment B is visible
(replace Fragment C)
Fragment B : onPause() - onStop() - onDestroyView()
Fragment A : onPause() - onStop() - onDestroyView()
Fragment C : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
remove Fragment C : onPause() - onStop() - onDestroyView() Activity is visible
(Back button clicked)
Fragment C : onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
(Back button clicked)
Fragment C : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach()
Fragment A : onCreateView() - onActivityCreated() - onStart() - onResume()
Fragment B : onCreateView() - onActivityCreated() - onStart() - onResume() Fragment B is visible
(Back button clicked)
Fragment B : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Fragment A is visible
(Back button clicked)
Fragment A : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Activity is visible
(Back button clicked)
Activity : onPause() - onStop() - onDestroy() App is closed, nothing is visible
Option 3 - "addToBackStack()" is not used always (in the below examples, w/o indicates that it is not used)
Case 3A - adding, removing, and clicking Back button
Activity : onCreate() - onStart() - onResume() Activity is visible
add Fragment A : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment A is visible
add Fragment B w/o: onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment B is visible
add Fragment C w/o: onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
remove Fragment C : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Fragment B is visible
(Back button clicked)
Fragment B : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach()
Fragment A : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Activity is visible
(Back button clicked)
Activity : onPause() - onStop() - onDestroy() App is closed, nothing is visible
Case 3B - adding, replacing, removing, and clicking Back button
Activity : onCreate() - onStart() - onResume() Activity is visible
add Fragment A : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment A is visible
add Fragment B w/o: onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment B is visible
(replace Fragment C)
Fragment B : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach()
Fragment A : onPause() - onStop() - onDestroyView()
Fragment C : onAttach() - onCreate() - onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
remove Fragment C : onPause() - onStop() - onDestroyView() Activity is visible
(Back button clicked)
Fragment C : onCreateView() - onActivityCreated() - onStart() - onResume() Fragment C is visible
(Back button clicked)
Fragment C : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach()
Fragment A : onCreateView() - onActivityCreated() - onStart() - onResume() Fragment A is visible
(Back button clicked)
Fragment A : onPause() - onStop() - onDestroyView() - onDestroy() - onDetach() Activity is visible
(Back button clicked)
Activity : onPause() - onStop() - onDestroy() App is closed, nothing is visible
document.getElementsByName("name")
will get several elements called by same name .
document.getElementsByName("name")[Number]
will get one of them.
document.getElementsByName("name")[Number].value
will get the value of paticular element.
The key of this question is this:
The name of elements is not unique, it is usually used for several input elements in the form.
On the other hand, the id of the element is unique, which is the only definition for a particular element in a html file.
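A small self-contained illustration (the field name "color" is made up):
<input name="color" value="red">
<input name="color" value="blue">
<script>
var inputs = document.getElementsByName("color"); // a NodeList of both inputs
console.log(inputs.length);   // 2
console.log(inputs[1].value); // "blue"
</script>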
Thread.currentThread().isInterrupted() works superbly, but that code only pauses the timer.
This code stops and resets the thread timer. hl is the handler name. Add this code inside your button click listener. w_h = minutes, w_m = milliseconds, i = counter.
i=0;
w_h = 0;
w_m = 0;
textView.setText(String.format("%02d", w_h) + ":" + String.format("%02d", w_m));
hl.removeCallbacksAndMessages(null);
Thread.currentThread().isInterrupted();
Django 1.10 no longer allows you to specify views as a string (e.g. 'myapp.views.home'
) in your URL patterns.
The solution is to update your urls.py
to include the view callable. This means that you have to import the view in your urls.py
. If your URL patterns don't have names, then now is a good time to add one, because reversing with the dotted python path no longer works.
from django.conf.urls import include, url
from django.contrib.auth.views import login
from myapp.views import home, contact
urlpatterns = [
url(r'^$', home, name='home'),
url(r'^contact/$', contact, name='contact'),
url(r'^login/$', login, name='login'),
]
If there are many views, then importing them individually can be inconvenient. An alternative is to import the views module from your app.
from django.conf.urls import include, url
from django.contrib.auth import views as auth_views
from myapp import views as myapp_views
urlpatterns = [
url(r'^$', myapp_views.home, name='home'),
url(r'^contact/$', myapp_views.contact, name='contact'),
url(r'^login/$', auth_views.login, name='login'),
]
Note that we have used as myapp_views
and as auth_views
, which allows us to import the views.py
from multiple apps without them clashing.
See the Django URL dispatcher docs for more information about urlpatterns
.
Also, you can use Lodash to directly convert an object to an array:
_.toArray({0:{a:4},1:{a:6},2:{a:5}})
[{a:4},{a:6},{a:5}]
In your case:
_.toArray(subjects).map((subject, i) => (
<li className="travelcompany-input" key={i}>
<span className="input-label">Name: {subject[name]}</span>
</li>
))}
I use 3 lines to do this job, so consider $s as your "stuff"...
$s=str_replace(chr(10),'',$s);
$s=str_replace(chr(13),'',$s);
$s=str_replace("\r\n"),'',$s);
Instead of taking the HttpServletRequest object in every method, keep it in the controller's context by auto-wiring it via the constructor. Then you can access it from all methods of the controller.
public class OAuth2ClientController {
@Autowired
private OAuth2ClientService oAuth2ClientService;
private HttpServletRequest request;
@Autowired
public OAuth2ClientController(HttpServletRequest request) {
this.request = request;
}
@RequestMapping(method = RequestMethod.POST)
public ResponseEntity<String> createClient(@RequestBody OAuth2Client client) {
System.out.println(request.getRequestURI());
System.out.println(request.getHeader("Content-Type"));
return ResponseEntity.ok().build();
}
}
Real life example; find all but not current user:
var players = Players.find({ my_x: player.my_x, my_y: player.my_y, userId: {$ne: Meteor.userId()} });
Looks like you are missing the printer name, driver, and port - in that order. Your final command should resemble:
AcroRd32.exe /t <file.pdf> <printer_name> <printer_driver> <printer_port>
For example:
"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe" /t "C:\Folder\File.pdf" "Brother MFC-7820N USB Printer" "Brother MFC-7820N USB Printer" "IP_192.168.10.110"
Note: To find the printer information, right click your printer and choose properties. In my case shown above, the printer name and driver name matched - but your information may differ.
EDIT: I found this link. Hope it helps. http://viralpatel.net/blogs/2011/02/clearable-textbox-jquery.html
You have mentioned you want it on the right of the input text. So, the best way would be to create an image next to the input box. If you are looking for something inside the box, you can use a background image, but you may not be able to write a script to clear the box.
So, insert an image and write JavaScript code to clear the textbox.
I have this in my .gitconfig file. It is still a draft, but proved useful as of now. It helps me to always reattach the submodules to their branch.
[alias]
######################
#
#Submodules aliases
#
######################
#git sm-trackbranch : places all submodules on their respective branch specified in .gitmodules
#This works if submodules are configured to track a branch, i.e if .gitmodules looks like :
#[submodule "my-submodule"]
# path = my-submodule
# url = [email protected]/my-submodule.git
# branch = my-branch
sm-trackbranch = "! git submodule foreach -q --recursive 'branch=\"$(git config -f $toplevel/.gitmodules submodule.$name.branch)\"; git checkout $branch'"
#sm-pullrebase :
# - pull --rebase on the master repo
# - sm-trackbranch on every submodule
# - pull --rebase on each submodule
#
# Important note :
#- have a clean master repo and subrepos before doing this !
#- this is *not* equivalent to getting the last committed
# master repo + its submodules: if some submodules are tracking branches
# that have evolved since the last commit in the master repo,
# they will be using those more recent commits !
#
# (Note : On the contrary, git submodule update will stick
#to the last committed SHA1 in the master repo)
#
sm-pullrebase = "! git pull --rebase; git submodule update; git sm-trackbranch ; git submodule foreach 'git pull --rebase' "
# git sm-diff will diff the master repo *and* its submodules
sm-diff = "! git diff && git submodule foreach 'git diff' "
#git sm-push will ask to push also submodules
sm-push = push --recurse-submodules=on-demand
#git alias : list all aliases
#useful in order to learn git syntax
alias = "!git config -l | grep alias | cut -c 7-"
I'm using the node module copy-to to create a single file that requires all the files in our NodeJS-based system.
The code for our utility file looks like this:
/**
* Module dependencies.
*/
var copy = require('copy-to');
copy(require('./module1'))
.and(require('./module2'))
.and(require('./module3'))
.to(module.exports);
In all of the files, most functions are written as exports, like so:
exports.function1 = function () { /* function contents */ };
exports.function2 = function () { /* function contents */ };
exports.function3 = function () { /* function contents */ };
So, then to use any function from a file, you just call:
var utility = require('./utility');
var response = utility.function2(); // or whatever the name of the function is
Here is the solution I was looking for, if you would like to create list2 containing the differences between consecutive elements in list1:
list1 = [12, 15, 22, 54, 21, 68, 9, 73, 81, 34, 45]
list2 = []
for i in range(1, len(list1)):
change = list1[i] - list1[i-1]
list2.append(change)
Note that while len(list1) is 11 (elements), len(list2) will only be 10 elements, because we start the for loop from the element with index 1 in list1, not from the element with index 0.
Since a static method is part of the class, not the instance, it is not possible to override a static method.
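A minimal Java sketch of what actually happens - a static method with the same signature in a subclass hides the parent's method rather than overriding it (class names are made up):
class Base {
    static String whoAmI() { return "Base"; }
}

class Derived extends Base {
    // This HIDES Base.whoAmI(); it does not override it.
    static String whoAmI() { return "Derived"; }
}

public class Demo {
    public static void main(String[] args) {
        Base b = new Derived();
        // Resolved at compile time from the declared type, not the runtime object:
        System.out.println(b.whoAmI()); // prints "Base"
    }
}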
You can do this to only monitor own properties of the object:
var arr = [];
for (var key in p) {
if (p.hasOwnProperty(key)) {
arr.push(p[key]);
}
}
As with all technologies, it has its ups and downs. If you are using an iframe to get around a properly developed site, then of course it is bad practice. However sometimes an iframe is acceptable.
One of the main problems with an iframe has to do with bookmarks and navigation. If you are using it to simply embed a page inside your content, I think that is fine. That is what an iframe is for.
However I've seen iframes abused as well. It should never be used as an integral part of your site, but as a piece of content within a site.
Usually, if you can do it without an iframe, that is a better option. I'm sure others here may have more information or more specific examples, it all comes down to the problem you are trying to solve.
With that said, if you are limited to HTML and have no access to a backend like PHP or ASP.NET etc, sometimes an iframe is your only option.
You should only need to do one of the following:
- make them inline (or inline-block), or
- float them left or right
, padding
, or margin
properties of the smaller heading to compensate for its positioning. I recommend setting both headings to have the same height
.
See this live jsFiddle for an example.
(code of the jsFiddle):
CSS
h2 {
font-size: 50px;
}
h3 {
font-size: 30px;
}
h2, h3 {
width: 50%;
height: 60px;
margin: 0;
padding: 0;
display: inline;
}
HTML
<h2>Big Heading</h2>
<h3>Small(er) Heading</h3>
<hr />
Here's an example you can run as a batch script (copy-paste into a .bat file), using the SQLCMD utility in Sql Server client tools:
BACKUP:
echo off
cls
echo -- BACKUP DATABASE --
set /p DATABASENAME=Enter database name:
:: filename format Name-Date (eg MyDatabase-2009.5.19.bak)
set DATESTAMP=%DATE:~-4%.%DATE:~7,2%.%DATE:~4,2%
set BACKUPFILENAME=%CD%\%DATABASENAME%-%DATESTAMP%.bak
set SERVERNAME=your server name here
echo.
sqlcmd -E -S %SERVERNAME% -d master -Q "BACKUP DATABASE [%DATABASENAME%] TO DISK = N'%BACKUPFILENAME%' WITH INIT , NOUNLOAD , NAME = N'%DATABASENAME% backup', NOSKIP , STATS = 10, NOFORMAT"
echo.
pause
RESTORE:
echo off
cls
echo -- RESTORE DATABASE --
set /p BACKUPFILENAME=Enter backup file name:%CD%\
set /p DATABASENAME=Enter database name:
set SERVERNAME=your server name here
sqlcmd -E -S %SERVERNAME% -d master -Q "ALTER DATABASE [%DATABASENAME%] SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
:: WARNING - delete the database, suits me
:: sqlcmd -E -S %SERVERNAME% -d master -Q "IF EXISTS (SELECT * FROM sysdatabases WHERE name=N'%DATABASENAME%' ) DROP DATABASE [%DATABASENAME%]"
:: sqlcmd -E -S %SERVERNAME% -d master -Q "CREATE DATABASE [%DATABASENAME%]"
:: restore
sqlcmd -E -S %SERVERNAME% -d master -Q "RESTORE DATABASE [%DATABASENAME%] FROM DISK = N'%CD%\%BACKUPFILENAME%' WITH REPLACE"
:: remap user/login (http://msdn.microsoft.com/en-us/library/ms174378.aspx)
sqlcmd -E -S %SERVERNAME% -d %DATABASENAME% -Q "sp_change_users_login 'Update_One', 'login-name', 'user-name'"
sqlcmd -E -S %SERVERNAME% -d master -Q "ALTER DATABASE [%DATABASENAME%] SET MULTI_USER"
echo.
pause
You shouldn't really worry about undefined being renamed. If someone renames undefined, you will be in a lot more trouble than just a few if checks failing. If you really want to protect your code, wrap it in an IIFE (immediately invoked function expression) like this:
(function($, Backbone, _, undefined) {
//undefined is undefined here.
})(jQuery, Backbone, _);
If you're working with global variables (which is wrong already) in a browser environment, I'd check for undefined like this:
if(window.neverDefined === undefined) {
//Code works
}
Since global variables are a part of the window object, you can simply check against undefined instead of casting to a string and comparing strings.
On top of that, why are your variables not defined? I've seen a lot of code where they check a variables existence and perform some action based on that. Not once have I seen where this approach has been correct.
Easiest way to get basic metadata summary is to use a temp table and then use EXEC function:
SELECT * INTO #TempTable FROM TableName
EXEC [tempdb].[dbo].[sp_help] N'#TempTable'
For all columns in the table, this will give you
Column Name,
Data Type,
Computed Length,
Prec,
Scale,
Nullable,
TrimTrailingBlanks,
FixedLenNullInSource,
Collation Type
Compare sequentially the letters that occupy the same position against each other, much like how you order words in a dictionary.
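For example, Java's String.compareTo implements exactly this dictionary order (a small illustration, since the question's language isn't shown here):
// First difference is at index 2: 'p' vs 'r', so "apple" sorts before "apricot".
System.out.println("apple".compareTo("apricot")); // negative ('p' - 'r' = -2)
// A prefix sorts before the longer word, just like in a dictionary.
System.out.println("app".compareTo("apple"));     // negative (length difference)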
You can achieve similar results with pig/hive queries. The main difference lies within approach to understanding/writing/creating queries.
Pig tends to create a flow of data: small steps where in each you do some processing
Hive gives you an SQL-like language to operate on your data, so the transition from an RDBMS is much easier (Pig can be easier for someone who has no prior experience with SQL).
It is also worth noting that with Hive you get a nice interface to work with this data (Beeswax for HUE, or the Hive web interface), and you also get a metastore with information about your data (schema, etc.), which is useful as central information about your data.
I use both Hive and Pig for different queries (I use whichever lets me write the query faster/easier, mostly for ad-hoc queries) - they can use the same data as input. But currently I'm doing much of my work through Beeswax.
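To illustrate the difference in approach, here is the same group-and-count written both ways (the logs table/relation and its columns are made up):
-- Hive: one declarative, SQL-like statement
SELECT status, COUNT(*) FROM logs GROUP BY status;

-- Pig: a step-by-step data flow over the same data
raw     = LOAD 'logs' USING PigStorage('\t') AS (status:chararray, msg:chararray);
grouped = GROUP raw BY status;
counts  = FOREACH grouped GENERATE group, COUNT(raw);
DUMP counts;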
An alternative version of the ordinal function could be as follows:
function toOrdinal(num) {
var ones = num % 10;
var tens = num % 100;
if (tens < 11 || tens > 13) {
switch (ones) {
case 1:
return num + "st";
case 2:
return num + "nd";
case 3:
return num + "rd";
}
}
return num + "th";
}
The variables are named more explicitly, the camelCase convention is used, and it might be faster.
Per the JQuery documentation, .get()
only takes the url
, data
(content), dataType
, and success
callback as its parameters. What you're really looking to do here is modify the jqXHR object before it gets sent. With .ajax()
, this is done with the beforeSend()
method. But since .get()
is a shortcut, it doesn't allow it.
It should be relatively easy to switch your .get() calls to .ajax() calls though. After all, .get() is just a subset of .ajax(), so you can probably use all the default values for .ajax() (except, of course, for beforeSend()).
Edit:
::Looks at Jivings' answer::
Oh yeah, forgot about the cache parameter! While beforeSend() is useful for adding other headers, the built-in cache parameter is far simpler here.
The development environment you are using on Windows may contain its own tools. Visual Studio, for example, lets you detect and isolate memory leaks in your programs.
Sorry for the necro answer, but someone may need this (I didn't find a solution for recent ffmpeg releases).
With ffmpeg 3.3.4 I found one can find with the following:
ffprobe -i video.mp4 -show_streams -hide_banner | grep "nb_frames"
At the end it will output the frame count. It worked for me on videos with audio. It prints an "nb_frames" line twice, though, but the first line was the actual frame count on the videos I tested.
On Windows operating system use this instead, this works for me:
https://{Username}:{Password}@github.com/{Username}/{repo}.git
e.g.
git clone https://{Username}:{Password}@github.com/{Username}/{repo}.git
git pull https://{Username}:{Password}@github.com/{Username}/{repo}.git
git remote add origin https://{Username}:{Password}@github.com/{Username}/{repo}.git
git push origin master
Probably you haven't injected $http
service to your controller. There are several ways of doing that.
Please read this reference about DI. Then it gets very simple:
function MyController($scope, $http) {
// ... your code
}
Make sure there is a namespace definition (xmlns) for the namespace your control belongs to:
xmlns:myControls="clr-namespace:YourCustomNamespace.Controls;assembly=YourAssemblyName"
<myControls:thecontrol/>
def fact(n, total=1):
while True:
if n == 1:
return total
n, total = n - 1, total * n
import cProfile

cProfile.run('fact(126000)')
4 function calls in 5.164 seconds
Using the stack is convenient (as with a recursive call), but it comes at a cost: storing detailed information can take up a lot of memory. If the stack is high, it means that the computer stores a lot of information about function calls.
The following method only takes up constant memory (like iteration):
def fact(n):
result = 1
for i in range(2, n + 1):
result *= i
return result
cProfile.run('fact(126000)')
4 function calls in 4.708 seconds
import math

def fact(n):
    return math.factorial(n)
cProfile.run('fact(126000)')
5 function calls in 0.272 seconds
I think I have an improvement to Mike Weller's excellent work.
Add a Run Script phase before the Compile Sources
phase containing this bash:
# This if-statement means we'll only run the main script if the
# CommonCrypto.framework directory doesn't exist because otherwise
# the rest of the script causes a full recompile for anything
# where CommonCrypto is a dependency
# Do a "Clean Build Folder" to remove this directory and trigger
# the rest of the script to run
FRAMEWORK_DIR="${BUILT_PRODUCTS_DIR}/CommonCrypto.framework"
if [ -d "${FRAMEWORK_DIR}" ]; then
echo "${FRAMEWORK_DIR} already exists, so skipping the rest of the script."
exit 0
fi
mkdir -p "${FRAMEWORK_DIR}/Modules"
cat <<EOF > "${FRAMEWORK_DIR}/Modules/module.modulemap"
module CommonCrypto [system] {
header "${SDKROOT}/usr/include/CommonCrypto/CommonCrypto.h"
export *
}
EOF
ln -sf "${SDKROOT}/usr/include/CommonCrypto" "${FRAMEWORK_DIR}/Headers"
This script constructs a bare bones framework with the module.map in the correct place and then relies on Xcode's automatic search of BUILT_PRODUCTS_DIR
for frameworks.
I linked the original CommonCrypto include folder as the framework's Headers folder so the result should also function for Objective C projects.
public class DuplicationNoInArray {
/**
* @param args
* the command line arguments
*/
public static void main(String[] args) throws Exception {
int[] arr = { 1, 2, 3, 4, 5, 1, 2, 8 };
int[] result = new int[10];
int counter = 0, count = 0;
for (int i = 0; i < arr.length; i++) {
boolean isDistinct = false;
for (int j = 0; j < i; j++) {
if (arr[i] == arr[j]) {
isDistinct = true;
break;
}
}
if (!isDistinct) {
result[counter++] = arr[i];
}
}
for (int i = 0; i < counter; i++) {
count = 0;
for (int j = 0; j < arr.length; j++) {
if (result[i] == arr[j]) {
count++;
}
}
System.out.println(result[i] + " = " + count);
}
}
}
As a general rule of thumb, I use 1 cm margins when producing pdfs. I work in the geospatial industry and produce pdf maps that reference a specific geographic scale. Therefore, I do not have the option to 'fit document to printable area,' because this would make the reference scale inaccurate. You must also realize that when you fit to printable area, you are fitting your already existing margins inside the printer margins, so you end up with double margins. Make your margins the right size and your documents will print perfectly. Many modern printers can print with margins less than 3 mm, so 1 cm as a general rule should be sufficient. However, if it is a high profile job, get the specs of the printer you will be printing with and ensure that your margins are adequate. All you need is the brand and model number and you can find spec sheets through a google search.
If a variable that is required to be final cannot be, you can assign its value to another variable and make that one final, so you can use it instead.
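A minimal Java sketch of the trick (the variable names are arbitrary):
int counter = 0;               // cannot be final: it is reassigned below
counter += 42;

final int snapshot = counter;  // copy the current value into a final variable
Runnable r = () -> System.out.println(snapshot); // inner contexts need an (effectively) final variable
r.run();                       // prints 42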
I got it working by installing the plugin "babel-plugin-inline-json-import" and then adding the plugin in .babelrc.
Install plugin
npm install --save-dev babel-plugin-inline-json-import
Config plugin in babelrc
"plugin": [ "inline-json-import" ]
And this is the code where I use it
import es from './es.json'
import en from './en.json'
export const dictionary = { es, en }
<script type="text/javascript">
var jvalue = 'this is javascript value';
<?php $abc = "<script>document.write(jvalue)</script>"?>
</script>
<?php echo 'php_'.$abc;?>
Best source of information for all of your DOM woes
http://www.w3.org/TR/dom/#nodes
"Objects implementing the Document, DocumentFragment, DocumentType, Element, Text, ProcessingInstruction, or Comment interface (simply called nodes) participate in a tree."
http://www.w3.org/TR/dom/#element
"Element nodes are simply known as elements."
I hope this will help.
<a href="url"><button>SomeText</button></a>
There is a well know relation between time and space complexity.
First of all, time is an obvious bound to space consumption: in time t you cannot reach more than O(t) memory cells. This is usually expressed by the inclusion
DTime(f) ⊆ DSpace(f)
where DTime(f) and DSpace(f) are the set of languages recognizable by a deterministic Turing machine in time (respectively, space) O(f). That is to say that if a problem can be solved in time O(f), then it can also be solved in space O(f).
Less evident is the fact that space provides a bound on time. Suppose that, on an input of size n, you have at your disposal f(n) memory cells, comprising registers, caches and everything. Once these cells have been written in all possible ways, the computation must eventually stop, since otherwise it would re-enter a configuration it already went through and start to loop. Now, on a binary alphabet, f(n) cells can be written in 2^f(n) different ways, which gives our time upper bound: either the computation stops within this bound, or you may force termination, since otherwise the computation will never stop.
This is usually expressed in the inclusion
DSpace(f) ⊆ DTime(2^(cf))
for some constant c. The reason for the constant c is that if L is in DSpace(f) you only know that it will be recognized in space O(f), while in the previous reasoning f was an actual bound.
The above relations are subsumed by stronger versions involving nondeterministic models of computation, which is the way they are frequently stated in textbooks (see e.g. Theorem 7.4 in Computational Complexity by Papadimitriou).
This is how I solved my problem of converting escaped characters like \uFE0F, \u000A, etc., as well as emoji encoded as UTF-16 surrogate pairs.
example = 'raw vegan chocolate cocoa pie w chocolate & vanilla cream\\uD83D\\uDE0D\\uD83D\\uDE0D\\u2764\\uFE0F Present Moment Caf\\u00E8 in St.Augustine\\u2764\\uFE0F\\u2764\\uFE0F '
import codecs
new_str = codecs.unicode_escape_decode(example)[0]
print(new_str)
>>> 'raw vegan chocolate cocoa pie w chocolate & vanilla cream\ud83d\ude0d\ud83d\ude0d❤️ Present Moment Cafè in St.Augustine❤️❤️ '
new_new_str = new_str.encode('utf-16', 'surrogatepass').decode('utf-16')
print(new_new_str)
>>> 'raw vegan chocolate cocoa pie w chocolate & vanilla cream😍😍❤️ Present Moment Cafè in St.Augustine❤️❤️ '
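Wrapped up as a small helper, this is just a repackaging of the two steps above:
import codecs

def decode_escaped(s):
    # step 1: turn literal \uXXXX escape sequences into real code points
    decoded = codecs.unicode_escape_decode(s)[0]
    # step 2: re-pair the UTF-16 surrogates left over from step 1
    return decoded.encode('utf-16', 'surrogatepass').decode('utf-16')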
You can use np.where to match the boolean conditions corresponding to the NaN values of the array, and map each outcome to generate a list of tuples.
>>> list(map(tuple, np.where(np.isnan(x))))
[(1, 2), (2, 0)]
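For reference, here is a minimal setup that reproduces that output; the array x below is an assumption, since the original answer doesn't show it. Note that the first tuple holds the row indices and the second the column indices; zip them if you want one (row, col) pair per NaN.
import numpy as np

# hypothetical input with NaNs at positions (1, 2) and (2, 0)
x = np.array([[0., 1., 2.],
              [3., 4., np.nan],
              [np.nan, 7., 8.]])
print(list(map(tuple, np.where(np.isnan(x)))))  # [(1, 2), (2, 0)]
print(list(zip(*np.where(np.isnan(x)))))        # one (row, col) pair per NaN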
sudo apt-get update
sudo apt-get install openjdk-8-jdk
This should work.
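You can verify the installation afterwards with:
java -version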
$cRepo = $em->getRepository('KaleLocationBundle:Country');
// Leave the first array blank
$countries = $cRepo->findBy(array(), array('name'=>'asc'));
Also the following command works:
describe SELECT * FROM table_name;
The SELECT can be replaced with any other SELECT statement, which is quite useful for checking complex INSERT ... SELECT statements, for example.
Obj C:
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController <UITextViewDelegate> // conform so that setting the delegate below is valid
@property (nonatomic) UITextView *textView;
@end
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
@synthesize textView;
- (void)viewDidLoad{
[super viewDidLoad];
[self.view setBackgroundColor:[UIColor grayColor]];
self.textView = [[UITextView alloc] initWithFrame:CGRectMake(30,10,250,20)];
self.textView.delegate = self;
[self.view addSubview:self.textView];
}
- (void)didReceiveMemoryWarning{
[super didReceiveMemoryWarning];
}
- (void)textViewDidChange:(UITextView *)txtView{
float height = txtView.contentSize.height;
[UIView beginAnimations:nil context:nil];
[UIView setAnimationDuration:0.5];
CGRect frame = txtView.frame;
frame.size.height = height + 10.0; //Give it some padding
txtView.frame = frame;
[UIView commitAnimations];
}
@end
I looked all over for an easy solution and found this code, which worked for me. The right
div is a third column that I left in for readability's sake.
Here is the HTML:
<div class="container">
<div class="row">
<div class="left">
<p>PHONE & FAX:</p>
</div>
<div class="middle">
<p>+43 99 554 28 53</p>
</div>
<div class="right"> </div>
</div>
<div class="row">
<div class="left">
<p>Cellphone Gert:</p>
</div>
<div class="middle">
<p>+43 99 302 52 32</p>
</div>
<div class="right"> </div>
</div>
<div class="row">
<div class="left">
<p>Cellphone Petra:</p>
</div>
<div class="middle">
<p>+43 99 739 38 84</p>
</div>
<div class="right"> </div>
</div>
</div>
And the CSS:
.container {
display: table;
}
.row {
display: table-row;
}
.left, .right, .middle {
display: table-cell;
padding-right: 25px;
}
.left p, .right p, .middle p {
margin: 1px 1px;
}
For Windows:
$ ssh-keygen -t rsa -b 4096 -C [email protected]
Different browsers do this differently.
First, open the console window by right-clicking on the page and selecting "Inspect Element", or by hitting F12.
In the console, type...
Firefox
functionName.toSource()
Chrome
functionName
Here is how to extract the contents of a HashMap whose values are ArrayLists into arrays:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
Map<String, List<String>> country_hashmap = new HashMap<String, List<String>>();
//Creating two lists and inserting some data in it
List<String> list_1 = new ArrayList<String>();
list_1.add("16873538.webp");
list_1.add("16873539.webp");
List<String> list_2 = new ArrayList<String>();
list_2.add("16873540.webp");
list_2.add("16873541.webp");
//Inserting both the lists and key to the Map
country_hashmap.put("Malaysia", list_1);
country_hashmap.put("Japanese", list_2);
for(Map.Entry<String, List<String>> hashmap_data : country_hashmap.entrySet()){
String key = hashmap_data.getKey(); // contains the keys
List<String> val = hashmap_data.getValue(); // contains arraylists
// print all the key and values in the hashmap
System.out.println(key + ": " +val);
// using an iterator to get the values of the specific arraylists
Iterator<String> itr = val.iterator();
int i = 0;
String[] data = new String[val.size()];
while (itr.hasNext()){
String array = itr.next();
data[i] = array;
System.out.println(data[i]); // GET THE VALUE
i++;
}
}
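If you don't need the iterator explicitly, List.toArray produces the same arrays more directly (a minimal variant of the loop above, using the same map):
for (Map.Entry<String, List<String>> entry : country_hashmap.entrySet()) {
    String[] data = entry.getValue().toArray(new String[0]); // ArrayList -> String[]
    System.out.println(entry.getKey() + ": " + java.util.Arrays.toString(data));
}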
Functional answer using flatMap:
const getPermutationsFor = (arr, permutation = []) =>
arr.length === 0
? [permutation]
: arr.flatMap((item, i, arr) =>
getPermutationsFor(
arr.filter((_,j) => j !== i),
[...permutation, item]
)
);
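A quick usage example:
console.log(getPermutationsFor([1, 2, 3]));
// [[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]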
Just use:
<![CDATA[ your text here ]]>
This will allow any characters except the ending
]]>
So you can include characters that would otherwise be illegal, such as & and >. For example:
<element><![CDATA[ characters such as & and > are allowed ]]></element>
However, attributes will need to be escaped as CDATA blocks can not be used for them.
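If the text itself can contain the terminator ]]>, the standard trick is to split it across two CDATA sections, so that ]] ends the first section and > begins the second:
<element><![CDATA[ contains ]]]]><![CDATA[> safely ]]></element>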
For Mac Users
I am using a Mac, and I faced the same problem while trying to push a project from Android Studio. The reason was that another user had previously logged into GitHub and their credentials were saved in Keychain Access.
The solution is to delete all the GitHub information stored in Keychain Access.
Resolved the issue... you need to add the SSH public key to your GitHub account:
1. Run ssh-keygen (the key pair is saved to ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub)
2. Add the public key (~/.ssh/id_rsa.pub) to your GitHub account
3. Run git clone. It works!
Initial status (public key not added to git hub account)
foo@bn18-251:~$ rm -rf test
foo@bn18-251:~$ ls
foo@bn18-251:~$ git clone [email protected]:devendra-d-chavan/test.git
Cloning into 'test'...
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
foo@bn18-251:~$
Now, add the public key ~/.ssh/id_rsa.pub
to the github account (I used cat ~/.ssh/id_rsa.pub
)
foo@bn18-251:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/foo/.ssh/id_rsa):
Created directory '/home/foo/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/foo/.ssh/id_rsa.
Your public key has been saved in /home/foo/.ssh/id_rsa.pub.
The key fingerprint is: xxxxx
The key's randomart image is:
+--[ RSA 2048]----+
xxxxx
+-----------------+
foo@bn18-251:~$ cat ./.ssh/id_rsa.pub
xxxxx
foo@bn18-251:~$ git clone [email protected]:devendra-d-chavan/test.git
Cloning into 'test'...
The authenticity of host 'github.com (207.97.227.239)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
Enter passphrase for key '/home/foo/.ssh/id_rsa':
warning: You appear to have cloned an empty repository.
foo@bn18-251:~$ ls
test
foo@bn18-251:~/test$ git status
# On branch master
#
# Initial commit
#
nothing to commit (create/copy files and use "git add" to track)
Late answer, but I found out that this is the simplest solution (if you don't use jQuery):
var myClasses = document.querySelectorAll('.my-class'),
i = 0,
l = myClasses.length;
for (i; i < l; i++) {
myClasses[i].style.display = 'none';
}
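In modern browsers NodeList supports forEach directly, so the loop can be shortened to:
document.querySelectorAll('.my-class').forEach(function (el) {
el.style.display = 'none';
});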
Exact implementation of Environment.NewLine
from the source code:
The implementation in .NET 4.6.1:
/*===================================NewLine====================================
**Action: A property which returns the appropriate newline string for the given
** platform.
**Returns: \r\n on Win32.
**Arguments: None.
**Exceptions: None.
==============================================================================*/
public static String NewLine {
get {
Contract.Ensures(Contract.Result<String>() != null);
return "\r\n";
}
}
The implementation in .NET Core:
/*===================================NewLine====================================
**Action: A property which returns the appropriate newline string for the
** given platform.
**Returns: \r\n on Win32.
**Arguments: None.
**Exceptions: None.
==============================================================================*/
public static String NewLine {
get {
Contract.Ensures(Contract.Result<String>() != null);
#if !PLATFORM_UNIX
return "\r\n";
#else
return "\n";
#endif // !PLATFORM_UNIX
}
}
source (in System.Private.CoreLib
)
public static string NewLine => "\r\n";
source (in System.Runtime.Extensions
)
Here is what I ended up doing and it worked great.
First I moved the file input outside of the form so that it is not submitted:
<input name="imagefile[]" type="file" id="takePictureField" accept="image/*" onchange="uploadPhotos(\'#{imageUploadUrl}\')" />
<form id="uploadImageForm" enctype="multipart/form-data">
<input id="name" value="#{name}" />
... a few more inputs ...
</form>
Then I changed the uploadPhotos
function to handle only the resizing:
window.uploadPhotos = function(url){
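// Note: this reads the implicit global `event` (WebKit behaviour); passing the
// change event into this function explicitly would be more portable.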
// Read in file
var file = event.target.files[0];
// Ensure it's an image
if(file.type.match(/image.*/)) {
console.log('An image has been loaded');
// Load the image
var reader = new FileReader();
reader.onload = function (readerEvent) {
var image = new Image();
image.onload = function (imageEvent) {
// Resize the image
var canvas = document.createElement('canvas'),
max_size = 544,// TODO : pull max size from a site config
width = image.width,
height = image.height;
if (width > height) {
if (width > max_size) {
height *= max_size / width;
width = max_size;
}
} else {
if (height > max_size) {
width *= max_size / height;
height = max_size;
}
}
canvas.width = width;
canvas.height = height;
canvas.getContext('2d').drawImage(image, 0, 0, width, height);
var dataUrl = canvas.toDataURL('image/jpeg');
var resizedImage = dataURLToBlob(dataUrl);
$.event.trigger({
type: "imageResized",
blob: resizedImage,
url: dataUrl
});
}
image.src = readerEvent.target.result;
}
reader.readAsDataURL(file);
}
};
As you can see, I'm using canvas.toDataURL('image/jpeg');
to change the resized image into a data URL, and then I call the function dataURLToBlob(dataUrl);
to turn the data URL into a blob that I can then append to the form. When the blob is created, I trigger a custom event. Here is the function to create the blob:
/* Utility function to convert a canvas to a BLOB */
var dataURLToBlob = function(dataURL) {
var BASE64_MARKER = ';base64,';
if (dataURL.indexOf(BASE64_MARKER) == -1) {
var parts = dataURL.split(',');
var contentType = parts[0].split(':')[1];
var raw = parts[1];
return new Blob([raw], {type: contentType});
}
var parts = dataURL.split(BASE64_MARKER);
var contentType = parts[0].split(':')[1];
var raw = window.atob(parts[1]);
var rawLength = raw.length;
var uInt8Array = new Uint8Array(rawLength);
for (var i = 0; i < rawLength; ++i) {
uInt8Array[i] = raw.charCodeAt(i);
}
return new Blob([uInt8Array], {type: contentType});
}
/* End Utility function to convert a canvas to a BLOB */
Finally, here is my event handler that takes the blob from the custom event, appends it to the form data, and then submits the form.
/* Handle image resized events */
$(document).on("imageResized", function (event) {
var data = new FormData($("form[id*='uploadImageForm']")[0]);
if (event.blob && event.url) {
data.append('image_data', event.blob);
$.ajax({
url: event.url,
data: data,
cache: false,
contentType: false,
processData: false,
type: 'POST',
success: function(data){
//handle errors...
}
});
}
});
It's better to validate user input so that symbols and code fragments are rejected by your input form. If you embed PHP in HTML, your PHP code has to come at the top of the page to make sure it cannot be turned into a comment if a hacker edits the page and adds /* to your HTML.
A few others that have not been mentioned:
For mini pies, lines and bars, Peity is brilliant, simple, tiny, fast, uses really elegant markup.
I'm not sure of its relationship with Flot (given its name), but Flotr2 is pretty good; it certainly does better pies than Flot.
Bluff produces nice-looking line graphs, but I had a bit of trouble with its pies.
Not what I was after, but another commercial product (much like Highcharts) is TeeChart.